Feb 18 08:55:04 np0005623263 kernel: Linux version 5.14.0-681.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Wed Feb 11 20:19:22 UTC 2026
Feb 18 08:55:04 np0005623263 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb 18 08:55:04 np0005623263 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-681.el9.x86_64 root=UUID=9d578f93-c4e9-4172-8459-ef150e54751c ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 18 08:55:04 np0005623263 kernel: BIOS-provided physical RAM map:
Feb 18 08:55:04 np0005623263 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 18 08:55:04 np0005623263 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 18 08:55:04 np0005623263 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 18 08:55:04 np0005623263 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb 18 08:55:04 np0005623263 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb 18 08:55:04 np0005623263 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 18 08:55:04 np0005623263 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 18 08:55:04 np0005623263 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Feb 18 08:55:04 np0005623263 kernel: NX (Execute Disable) protection: active
Feb 18 08:55:04 np0005623263 kernel: APIC: Static calls initialized
Feb 18 08:55:04 np0005623263 kernel: SMBIOS 2.8 present.
Feb 18 08:55:04 np0005623263 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 18 08:55:04 np0005623263 kernel: Hypervisor detected: KVM
Feb 18 08:55:04 np0005623263 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 18 08:55:04 np0005623263 kernel: kvm-clock: using sched offset of 9643488638 cycles
Feb 18 08:55:04 np0005623263 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 18 08:55:04 np0005623263 kernel: tsc: Detected 2800.000 MHz processor
Feb 18 08:55:04 np0005623263 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Feb 18 08:55:04 np0005623263 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 18 08:55:04 np0005623263 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 18 08:55:04 np0005623263 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb 18 08:55:04 np0005623263 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb 18 08:55:04 np0005623263 kernel: Using GB pages for direct mapping
Feb 18 08:55:04 np0005623263 kernel: RAMDISK: [mem 0x1b6f6000-0x29b72fff]
Feb 18 08:55:04 np0005623263 kernel: ACPI: Early table checksum verification disabled
Feb 18 08:55:04 np0005623263 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 18 08:55:04 np0005623263 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 18 08:55:04 np0005623263 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 18 08:55:04 np0005623263 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 18 08:55:04 np0005623263 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb 18 08:55:04 np0005623263 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 18 08:55:04 np0005623263 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 18 08:55:04 np0005623263 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb 18 08:55:04 np0005623263 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb 18 08:55:04 np0005623263 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb 18 08:55:04 np0005623263 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb 18 08:55:04 np0005623263 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Feb 18 08:55:04 np0005623263 kernel: No NUMA configuration found
Feb 18 08:55:04 np0005623263 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Feb 18 08:55:04 np0005623263 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Feb 18 08:55:04 np0005623263 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Feb 18 08:55:04 np0005623263 kernel: Zone ranges:
Feb 18 08:55:04 np0005623263 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 18 08:55:04 np0005623263 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 18 08:55:04 np0005623263 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Feb 18 08:55:04 np0005623263 kernel:  Device   empty
Feb 18 08:55:04 np0005623263 kernel: Movable zone start for each node
Feb 18 08:55:04 np0005623263 kernel: Early memory node ranges
Feb 18 08:55:04 np0005623263 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 18 08:55:04 np0005623263 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb 18 08:55:04 np0005623263 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Feb 18 08:55:04 np0005623263 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Feb 18 08:55:04 np0005623263 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 18 08:55:04 np0005623263 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 18 08:55:04 np0005623263 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb 18 08:55:04 np0005623263 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 18 08:55:04 np0005623263 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 18 08:55:04 np0005623263 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 18 08:55:04 np0005623263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 18 08:55:04 np0005623263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 18 08:55:04 np0005623263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 18 08:55:04 np0005623263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 18 08:55:04 np0005623263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 18 08:55:04 np0005623263 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 18 08:55:04 np0005623263 kernel: TSC deadline timer available
Feb 18 08:55:04 np0005623263 kernel: CPU topo: Max. logical packages:   8
Feb 18 08:55:04 np0005623263 kernel: CPU topo: Max. logical dies:       8
Feb 18 08:55:04 np0005623263 kernel: CPU topo: Max. dies per package:   1
Feb 18 08:55:04 np0005623263 kernel: CPU topo: Max. threads per core:   1
Feb 18 08:55:04 np0005623263 kernel: CPU topo: Num. cores per package:     1
Feb 18 08:55:04 np0005623263 kernel: CPU topo: Num. threads per package:   1
Feb 18 08:55:04 np0005623263 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Feb 18 08:55:04 np0005623263 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 18 08:55:04 np0005623263 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 18 08:55:04 np0005623263 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 18 08:55:04 np0005623263 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 18 08:55:04 np0005623263 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 18 08:55:04 np0005623263 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb 18 08:55:04 np0005623263 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb 18 08:55:04 np0005623263 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb 18 08:55:04 np0005623263 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb 18 08:55:04 np0005623263 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 18 08:55:04 np0005623263 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb 18 08:55:04 np0005623263 kernel: Booting paravirtualized kernel on KVM
Feb 18 08:55:04 np0005623263 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 18 08:55:04 np0005623263 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb 18 08:55:04 np0005623263 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Feb 18 08:55:04 np0005623263 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 18 08:55:04 np0005623263 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-681.el9.x86_64 root=UUID=9d578f93-c4e9-4172-8459-ef150e54751c ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 18 08:55:04 np0005623263 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-681.el9.x86_64", will be passed to user space.
Feb 18 08:55:04 np0005623263 kernel: random: crng init done
Feb 18 08:55:04 np0005623263 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 18 08:55:04 np0005623263 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 18 08:55:04 np0005623263 kernel: Fallback order for Node 0: 0 
Feb 18 08:55:04 np0005623263 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Feb 18 08:55:04 np0005623263 kernel: Policy zone: Normal
Feb 18 08:55:04 np0005623263 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 18 08:55:04 np0005623263 kernel: software IO TLB: area num 8.
Feb 18 08:55:04 np0005623263 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb 18 08:55:04 np0005623263 kernel: ftrace: allocating 49565 entries in 194 pages
Feb 18 08:55:04 np0005623263 kernel: ftrace: allocated 194 pages with 3 groups
Feb 18 08:55:04 np0005623263 kernel: Dynamic Preempt: voluntary
Feb 18 08:55:04 np0005623263 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 18 08:55:04 np0005623263 kernel: rcu: 	RCU event tracing is enabled.
Feb 18 08:55:04 np0005623263 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb 18 08:55:04 np0005623263 kernel: 	Trampoline variant of Tasks RCU enabled.
Feb 18 08:55:04 np0005623263 kernel: 	Rude variant of Tasks RCU enabled.
Feb 18 08:55:04 np0005623263 kernel: 	Tracing variant of Tasks RCU enabled.
Feb 18 08:55:04 np0005623263 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 18 08:55:04 np0005623263 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb 18 08:55:04 np0005623263 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 18 08:55:04 np0005623263 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 18 08:55:04 np0005623263 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 18 08:55:04 np0005623263 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb 18 08:55:04 np0005623263 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 18 08:55:04 np0005623263 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb 18 08:55:04 np0005623263 kernel: Console: colour VGA+ 80x25
Feb 18 08:55:04 np0005623263 kernel: printk: console [ttyS0] enabled
Feb 18 08:55:04 np0005623263 kernel: ACPI: Core revision 20230331
Feb 18 08:55:04 np0005623263 kernel: APIC: Switch to symmetric I/O mode setup
Feb 18 08:55:04 np0005623263 kernel: x2apic enabled
Feb 18 08:55:04 np0005623263 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 18 08:55:04 np0005623263 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 18 08:55:04 np0005623263 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Feb 18 08:55:04 np0005623263 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 18 08:55:04 np0005623263 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 18 08:55:04 np0005623263 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 18 08:55:04 np0005623263 kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Feb 18 08:55:04 np0005623263 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 18 08:55:04 np0005623263 kernel: Spectre V2 : Mitigation: Retpolines
Feb 18 08:55:04 np0005623263 kernel: RETBleed: Mitigation: untrained return thunk
Feb 18 08:55:04 np0005623263 kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Feb 18 08:55:04 np0005623263 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 18 08:55:04 np0005623263 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Feb 18 08:55:04 np0005623263 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 18 08:55:04 np0005623263 kernel: active return thunk: retbleed_return_thunk
Feb 18 08:55:04 np0005623263 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 18 08:55:04 np0005623263 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 18 08:55:04 np0005623263 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 18 08:55:04 np0005623263 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 18 08:55:04 np0005623263 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 18 08:55:04 np0005623263 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 18 08:55:04 np0005623263 kernel: Freeing SMP alternatives memory: 40K
Feb 18 08:55:04 np0005623263 kernel: pid_max: default: 32768 minimum: 301
Feb 18 08:55:04 np0005623263 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Feb 18 08:55:04 np0005623263 kernel: landlock: Up and running.
Feb 18 08:55:04 np0005623263 kernel: Yama: becoming mindful.
Feb 18 08:55:04 np0005623263 kernel: SELinux:  Initializing.
Feb 18 08:55:04 np0005623263 kernel: LSM support for eBPF active
Feb 18 08:55:04 np0005623263 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 18 08:55:04 np0005623263 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 18 08:55:04 np0005623263 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 18 08:55:04 np0005623263 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 18 08:55:04 np0005623263 kernel: ... version:                0
Feb 18 08:55:04 np0005623263 kernel: ... bit width:              48
Feb 18 08:55:04 np0005623263 kernel: ... generic registers:      6
Feb 18 08:55:04 np0005623263 kernel: ... value mask:             0000ffffffffffff
Feb 18 08:55:04 np0005623263 kernel: ... max period:             00007fffffffffff
Feb 18 08:55:04 np0005623263 kernel: ... fixed-purpose events:   0
Feb 18 08:55:04 np0005623263 kernel: ... event mask:             000000000000003f
Feb 18 08:55:04 np0005623263 kernel: signal: max sigframe size: 1776
Feb 18 08:55:04 np0005623263 kernel: rcu: Hierarchical SRCU implementation.
Feb 18 08:55:04 np0005623263 kernel: rcu: 	Max phase no-delay instances is 400.
Feb 18 08:55:04 np0005623263 kernel: smp: Bringing up secondary CPUs ...
Feb 18 08:55:04 np0005623263 kernel: smpboot: x86: Booting SMP configuration:
Feb 18 08:55:04 np0005623263 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Feb 18 08:55:04 np0005623263 kernel: smp: Brought up 1 node, 8 CPUs
Feb 18 08:55:04 np0005623263 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Feb 18 08:55:04 np0005623263 kernel: node 0 deferred pages initialised in 9ms
Feb 18 08:55:04 np0005623263 kernel: Memory: 7617772K/8388068K available (16384K kernel code, 5795K rwdata, 13948K rodata, 4204K init, 7180K bss, 764376K reserved, 0K cma-reserved)
Feb 18 08:55:04 np0005623263 kernel: devtmpfs: initialized
Feb 18 08:55:04 np0005623263 kernel: x86/mm: Memory block size: 128MB
Feb 18 08:55:04 np0005623263 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 18 08:55:04 np0005623263 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Feb 18 08:55:04 np0005623263 kernel: pinctrl core: initialized pinctrl subsystem
Feb 18 08:55:04 np0005623263 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 18 08:55:04 np0005623263 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Feb 18 08:55:04 np0005623263 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 18 08:55:04 np0005623263 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 18 08:55:04 np0005623263 kernel: audit: initializing netlink subsys (disabled)
Feb 18 08:55:04 np0005623263 kernel: audit: type=2000 audit(1771422903.450:1): state=initialized audit_enabled=0 res=1
Feb 18 08:55:04 np0005623263 kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb 18 08:55:04 np0005623263 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 18 08:55:04 np0005623263 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 18 08:55:04 np0005623263 kernel: cpuidle: using governor menu
Feb 18 08:55:04 np0005623263 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 18 08:55:04 np0005623263 kernel: PCI: Using configuration type 1 for base access
Feb 18 08:55:04 np0005623263 kernel: PCI: Using configuration type 1 for extended access
Feb 18 08:55:04 np0005623263 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 18 08:55:04 np0005623263 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 18 08:55:04 np0005623263 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 18 08:55:04 np0005623263 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 18 08:55:04 np0005623263 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 18 08:55:04 np0005623263 kernel: Demotion targets for Node 0: null
Feb 18 08:55:04 np0005623263 kernel: cryptd: max_cpu_qlen set to 1000
Feb 18 08:55:04 np0005623263 kernel: ACPI: Added _OSI(Module Device)
Feb 18 08:55:04 np0005623263 kernel: ACPI: Added _OSI(Processor Device)
Feb 18 08:55:04 np0005623263 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 18 08:55:04 np0005623263 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 18 08:55:04 np0005623263 kernel: ACPI: Interpreter enabled
Feb 18 08:55:04 np0005623263 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb 18 08:55:04 np0005623263 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 18 08:55:04 np0005623263 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 18 08:55:04 np0005623263 kernel: PCI: Using E820 reservations for host bridge windows
Feb 18 08:55:04 np0005623263 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 18 08:55:04 np0005623263 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 18 08:55:04 np0005623263 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [3] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [4] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [5] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [6] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [7] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [8] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [9] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [10] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [11] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [12] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [13] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [14] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [15] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [16] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [17] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [18] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [19] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [20] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [21] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [22] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [23] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [24] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [25] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [26] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [27] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [28] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [29] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [30] registered
Feb 18 08:55:04 np0005623263 kernel: acpiphp: Slot [31] registered
Feb 18 08:55:04 np0005623263 kernel: PCI host bridge to bus 0000:00
Feb 18 08:55:04 np0005623263 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 18 08:55:04 np0005623263 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 18 08:55:04 np0005623263 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 18 08:55:04 np0005623263 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 18 08:55:04 np0005623263 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Feb 18 08:55:04 np0005623263 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 18 08:55:04 np0005623263 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 18 08:55:04 np0005623263 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 18 08:55:04 np0005623263 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 18 08:55:04 np0005623263 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 18 08:55:04 np0005623263 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 18 08:55:04 np0005623263 kernel: iommu: Default domain type: Translated
Feb 18 08:55:04 np0005623263 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 18 08:55:04 np0005623263 kernel: SCSI subsystem initialized
Feb 18 08:55:04 np0005623263 kernel: ACPI: bus type USB registered
Feb 18 08:55:04 np0005623263 kernel: usbcore: registered new interface driver usbfs
Feb 18 08:55:04 np0005623263 kernel: usbcore: registered new interface driver hub
Feb 18 08:55:04 np0005623263 kernel: usbcore: registered new device driver usb
Feb 18 08:55:04 np0005623263 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 18 08:55:04 np0005623263 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 18 08:55:04 np0005623263 kernel: PTP clock support registered
Feb 18 08:55:04 np0005623263 kernel: EDAC MC: Ver: 3.0.0
Feb 18 08:55:04 np0005623263 kernel: NetLabel: Initializing
Feb 18 08:55:04 np0005623263 kernel: NetLabel:  domain hash size = 128
Feb 18 08:55:04 np0005623263 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Feb 18 08:55:04 np0005623263 kernel: NetLabel:  unlabeled traffic allowed by default
Feb 18 08:55:04 np0005623263 kernel: PCI: Using ACPI for IRQ routing
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 18 08:55:04 np0005623263 kernel: vgaarb: loaded
Feb 18 08:55:04 np0005623263 kernel: clocksource: Switched to clocksource kvm-clock
Feb 18 08:55:04 np0005623263 kernel: VFS: Disk quotas dquot_6.6.0
Feb 18 08:55:04 np0005623263 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 18 08:55:04 np0005623263 kernel: pnp: PnP ACPI init
Feb 18 08:55:04 np0005623263 kernel: pnp: PnP ACPI: found 5 devices
Feb 18 08:55:04 np0005623263 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 18 08:55:04 np0005623263 kernel: NET: Registered PF_INET protocol family
Feb 18 08:55:04 np0005623263 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 18 08:55:04 np0005623263 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 18 08:55:04 np0005623263 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 18 08:55:04 np0005623263 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 18 08:55:04 np0005623263 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 18 08:55:04 np0005623263 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 18 08:55:04 np0005623263 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Feb 18 08:55:04 np0005623263 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 18 08:55:04 np0005623263 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 18 08:55:04 np0005623263 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 18 08:55:04 np0005623263 kernel: NET: Registered PF_XDP protocol family
Feb 18 08:55:04 np0005623263 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 18 08:55:04 np0005623263 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 18 08:55:04 np0005623263 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 18 08:55:04 np0005623263 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 18 08:55:04 np0005623263 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 18 08:55:04 np0005623263 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 18 08:55:04 np0005623263 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 57250 usecs
Feb 18 08:55:04 np0005623263 kernel: PCI: CLS 0 bytes, default 64
Feb 18 08:55:04 np0005623263 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 18 08:55:04 np0005623263 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Feb 18 08:55:04 np0005623263 kernel: ACPI: bus type thunderbolt registered
Feb 18 08:55:04 np0005623263 kernel: Trying to unpack rootfs image as initramfs...
Feb 18 08:55:04 np0005623263 kernel: Initialise system trusted keyrings
Feb 18 08:55:04 np0005623263 kernel: Key type blacklist registered
Feb 18 08:55:04 np0005623263 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Feb 18 08:55:04 np0005623263 kernel: zbud: loaded
Feb 18 08:55:04 np0005623263 kernel: integrity: Platform Keyring initialized
Feb 18 08:55:04 np0005623263 kernel: integrity: Machine keyring initialized
Feb 18 08:55:04 np0005623263 kernel: Freeing initrd memory: 233972K
Feb 18 08:55:04 np0005623263 kernel: NET: Registered PF_ALG protocol family
Feb 18 08:55:04 np0005623263 kernel: xor: automatically using best checksumming function   avx       
Feb 18 08:55:04 np0005623263 kernel: Key type asymmetric registered
Feb 18 08:55:04 np0005623263 kernel: Asymmetric key parser 'x509' registered
Feb 18 08:55:04 np0005623263 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb 18 08:55:04 np0005623263 kernel: io scheduler mq-deadline registered
Feb 18 08:55:04 np0005623263 kernel: io scheduler kyber registered
Feb 18 08:55:04 np0005623263 kernel: io scheduler bfq registered
Feb 18 08:55:04 np0005623263 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 18 08:55:04 np0005623263 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 18 08:55:04 np0005623263 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 18 08:55:04 np0005623263 kernel: ACPI: button: Power Button [PWRF]
Feb 18 08:55:04 np0005623263 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 18 08:55:04 np0005623263 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 18 08:55:04 np0005623263 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 18 08:55:04 np0005623263 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 18 08:55:04 np0005623263 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 18 08:55:04 np0005623263 kernel: Non-volatile memory driver v1.3
Feb 18 08:55:04 np0005623263 kernel: rdac: device handler registered
Feb 18 08:55:04 np0005623263 kernel: hp_sw: device handler registered
Feb 18 08:55:04 np0005623263 kernel: emc: device handler registered
Feb 18 08:55:04 np0005623263 kernel: alua: device handler registered
Feb 18 08:55:04 np0005623263 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 18 08:55:04 np0005623263 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 18 08:55:04 np0005623263 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 18 08:55:04 np0005623263 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb 18 08:55:04 np0005623263 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb 18 08:55:04 np0005623263 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb 18 08:55:04 np0005623263 kernel: usb usb1: Product: UHCI Host Controller
Feb 18 08:55:04 np0005623263 kernel: usb usb1: Manufacturer: Linux 5.14.0-681.el9.x86_64 uhci_hcd
Feb 18 08:55:04 np0005623263 kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb 18 08:55:04 np0005623263 kernel: hub 1-0:1.0: USB hub found
Feb 18 08:55:04 np0005623263 kernel: hub 1-0:1.0: 2 ports detected
Feb 18 08:55:04 np0005623263 kernel: usbcore: registered new interface driver usbserial_generic
Feb 18 08:55:04 np0005623263 kernel: usbserial: USB Serial support registered for generic
Feb 18 08:55:04 np0005623263 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 18 08:55:04 np0005623263 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 18 08:55:04 np0005623263 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 18 08:55:04 np0005623263 kernel: mousedev: PS/2 mouse device common for all mice
Feb 18 08:55:04 np0005623263 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 18 08:55:04 np0005623263 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 18 08:55:04 np0005623263 kernel: rtc_cmos 00:04: registered as rtc0
Feb 18 08:55:04 np0005623263 kernel: rtc_cmos 00:04: setting system clock to 2026-02-18T13:55:03 UTC (1771422903)
Feb 18 08:55:04 np0005623263 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 18 08:55:04 np0005623263 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 18 08:55:04 np0005623263 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb 18 08:55:04 np0005623263 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb 18 08:55:04 np0005623263 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 18 08:55:04 np0005623263 kernel: usbcore: registered new interface driver usbhid
Feb 18 08:55:04 np0005623263 kernel: usbhid: USB HID core driver
Feb 18 08:55:04 np0005623263 kernel: drop_monitor: Initializing network drop monitor service
Feb 18 08:55:04 np0005623263 kernel: Initializing XFRM netlink socket
Feb 18 08:55:04 np0005623263 kernel: NET: Registered PF_INET6 protocol family
Feb 18 08:55:04 np0005623263 kernel: Segment Routing with IPv6
Feb 18 08:55:04 np0005623263 kernel: NET: Registered PF_PACKET protocol family
Feb 18 08:55:04 np0005623263 kernel: mpls_gso: MPLS GSO support
Feb 18 08:55:04 np0005623263 kernel: IPI shorthand broadcast: enabled
Feb 18 08:55:04 np0005623263 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 18 08:55:04 np0005623263 kernel: AES CTR mode by8 optimization enabled
Feb 18 08:55:04 np0005623263 kernel: sched_clock: Marking stable (1054011379, 147940640)->(1300251949, -98299930)
Feb 18 08:55:04 np0005623263 kernel: registered taskstats version 1
Feb 18 08:55:04 np0005623263 kernel: Loading compiled-in X.509 certificates
Feb 18 08:55:04 np0005623263 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4fde1469d8033882223ec575c1a1c1b88c9a497b'
Feb 18 08:55:04 np0005623263 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 18 08:55:04 np0005623263 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 18 08:55:04 np0005623263 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Feb 18 08:55:04 np0005623263 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Feb 18 08:55:04 np0005623263 kernel: Demotion targets for Node 0: null
Feb 18 08:55:04 np0005623263 kernel: page_owner is disabled
Feb 18 08:55:04 np0005623263 kernel: Key type .fscrypt registered
Feb 18 08:55:04 np0005623263 kernel: Key type fscrypt-provisioning registered
Feb 18 08:55:04 np0005623263 kernel: Key type big_key registered
Feb 18 08:55:04 np0005623263 kernel: Key type encrypted registered
Feb 18 08:55:04 np0005623263 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 18 08:55:04 np0005623263 kernel: Loading compiled-in module X.509 certificates
Feb 18 08:55:04 np0005623263 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4fde1469d8033882223ec575c1a1c1b88c9a497b'
Feb 18 08:55:04 np0005623263 kernel: ima: Allocated hash algorithm: sha256
Feb 18 08:55:04 np0005623263 kernel: ima: No architecture policies found
Feb 18 08:55:04 np0005623263 kernel: evm: Initialising EVM extended attributes:
Feb 18 08:55:04 np0005623263 kernel: evm: security.selinux
Feb 18 08:55:04 np0005623263 kernel: evm: security.SMACK64 (disabled)
Feb 18 08:55:04 np0005623263 kernel: evm: security.SMACK64EXEC (disabled)
Feb 18 08:55:04 np0005623263 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb 18 08:55:04 np0005623263 kernel: evm: security.SMACK64MMAP (disabled)
Feb 18 08:55:04 np0005623263 kernel: evm: security.apparmor (disabled)
Feb 18 08:55:04 np0005623263 kernel: evm: security.ima
Feb 18 08:55:04 np0005623263 kernel: evm: security.capability
Feb 18 08:55:04 np0005623263 kernel: evm: HMAC attrs: 0x1
Feb 18 08:55:04 np0005623263 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb 18 08:55:04 np0005623263 kernel: Running certificate verification RSA selftest
Feb 18 08:55:04 np0005623263 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 18 08:55:04 np0005623263 kernel: Running certificate verification ECDSA selftest
Feb 18 08:55:04 np0005623263 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Feb 18 08:55:04 np0005623263 kernel: clk: Disabling unused clocks
Feb 18 08:55:04 np0005623263 kernel: Freeing unused decrypted memory: 2028K
Feb 18 08:55:04 np0005623263 kernel: Freeing unused kernel image (initmem) memory: 4204K
Feb 18 08:55:04 np0005623263 kernel: Write protecting the kernel read-only data: 30720k
Feb 18 08:55:04 np0005623263 kernel: Freeing unused kernel image (rodata/data gap) memory: 388K
Feb 18 08:55:04 np0005623263 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb 18 08:55:04 np0005623263 kernel: Run /init as init process
Feb 18 08:55:04 np0005623263 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 18 08:55:04 np0005623263 systemd: Detected virtualization kvm.
Feb 18 08:55:04 np0005623263 systemd: Detected architecture x86-64.
Feb 18 08:55:04 np0005623263 systemd: Running in initrd.
Feb 18 08:55:04 np0005623263 systemd: No hostname configured, using default hostname.
Feb 18 08:55:04 np0005623263 systemd: Hostname set to <localhost>.
Feb 18 08:55:04 np0005623263 systemd: Initializing machine ID from VM UUID.
Feb 18 08:55:04 np0005623263 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb 18 08:55:04 np0005623263 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb 18 08:55:04 np0005623263 kernel: usb 1-1: Product: QEMU USB Tablet
Feb 18 08:55:04 np0005623263 kernel: usb 1-1: Manufacturer: QEMU
Feb 18 08:55:04 np0005623263 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb 18 08:55:04 np0005623263 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb 18 08:55:04 np0005623263 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb 18 08:55:04 np0005623263 systemd: Queued start job for default target Initrd Default Target.
Feb 18 08:55:04 np0005623263 systemd: Started Dispatch Password Requests to Console Directory Watch.
Feb 18 08:55:04 np0005623263 systemd: Reached target Local Encrypted Volumes.
Feb 18 08:55:04 np0005623263 systemd: Reached target Initrd /usr File System.
Feb 18 08:55:04 np0005623263 systemd: Reached target Local File Systems.
Feb 18 08:55:04 np0005623263 systemd: Reached target Path Units.
Feb 18 08:55:04 np0005623263 systemd: Reached target Slice Units.
Feb 18 08:55:04 np0005623263 systemd: Reached target Swaps.
Feb 18 08:55:04 np0005623263 systemd: Reached target Timer Units.
Feb 18 08:55:04 np0005623263 systemd: Listening on D-Bus System Message Bus Socket.
Feb 18 08:55:04 np0005623263 systemd: Listening on Journal Socket (/dev/log).
Feb 18 08:55:04 np0005623263 systemd: Listening on Journal Socket.
Feb 18 08:55:04 np0005623263 systemd: Listening on udev Control Socket.
Feb 18 08:55:04 np0005623263 systemd: Listening on udev Kernel Socket.
Feb 18 08:55:04 np0005623263 systemd: Reached target Socket Units.
Feb 18 08:55:04 np0005623263 systemd: Starting Create List of Static Device Nodes...
Feb 18 08:55:04 np0005623263 systemd: Starting Journal Service...
Feb 18 08:55:04 np0005623263 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 18 08:55:04 np0005623263 systemd: Starting Apply Kernel Variables...
Feb 18 08:55:04 np0005623263 systemd: Starting Create System Users...
Feb 18 08:55:04 np0005623263 systemd: Starting Setup Virtual Console...
Feb 18 08:55:04 np0005623263 systemd: Finished Create List of Static Device Nodes.
Feb 18 08:55:04 np0005623263 systemd: Finished Apply Kernel Variables.
Feb 18 08:55:04 np0005623263 systemd: Finished Create System Users.
Feb 18 08:55:04 np0005623263 systemd-journald[308]: Journal started
Feb 18 08:55:04 np0005623263 systemd-journald[308]: Runtime Journal (/run/log/journal/022190eb356a449ebedd18333ca89982) is 8.0M, max 153.6M, 145.6M free.
Feb 18 08:55:04 np0005623263 systemd-sysusers[312]: Creating group 'users' with GID 100.
Feb 18 08:55:04 np0005623263 systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Feb 18 08:55:04 np0005623263 systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb 18 08:55:04 np0005623263 systemd: Started Journal Service.
Feb 18 08:55:04 np0005623263 systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 18 08:55:04 np0005623263 systemd[1]: Starting Create Volatile Files and Directories...
Feb 18 08:55:04 np0005623263 systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 18 08:55:04 np0005623263 systemd[1]: Finished Create Volatile Files and Directories.
Feb 18 08:55:04 np0005623263 systemd[1]: Finished Setup Virtual Console.
Feb 18 08:55:04 np0005623263 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb 18 08:55:04 np0005623263 systemd[1]: Starting dracut cmdline hook...
Feb 18 08:55:04 np0005623263 dracut-cmdline[329]: dracut-9 dracut-057-110.git20260130.el9
Feb 18 08:55:04 np0005623263 dracut-cmdline[329]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-681.el9.x86_64 root=UUID=9d578f93-c4e9-4172-8459-ef150e54751c ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 18 08:55:04 np0005623263 systemd[1]: Finished dracut cmdline hook.
Feb 18 08:55:04 np0005623263 systemd[1]: Starting dracut pre-udev hook...
Feb 18 08:55:04 np0005623263 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 18 08:55:04 np0005623263 kernel: device-mapper: uevent: version 1.0.3
Feb 18 08:55:04 np0005623263 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Feb 18 08:55:04 np0005623263 kernel: RPC: Registered named UNIX socket transport module.
Feb 18 08:55:04 np0005623263 kernel: RPC: Registered udp transport module.
Feb 18 08:55:04 np0005623263 kernel: RPC: Registered tcp transport module.
Feb 18 08:55:04 np0005623263 kernel: RPC: Registered tcp-with-tls transport module.
Feb 18 08:55:04 np0005623263 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 18 08:55:04 np0005623263 rpc.statd[446]: Version 2.5.4 starting
Feb 18 08:55:04 np0005623263 rpc.statd[446]: Initializing NSM state
Feb 18 08:55:04 np0005623263 rpc.idmapd[451]: Setting log level to 0
Feb 18 08:55:04 np0005623263 systemd[1]: Finished dracut pre-udev hook.
Feb 18 08:55:04 np0005623263 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 18 08:55:04 np0005623263 systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Feb 18 08:55:04 np0005623263 systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 18 08:55:04 np0005623263 systemd[1]: Starting dracut pre-trigger hook...
Feb 18 08:55:04 np0005623263 systemd[1]: Finished dracut pre-trigger hook.
Feb 18 08:55:04 np0005623263 systemd[1]: Starting Coldplug All udev Devices...
Feb 18 08:55:04 np0005623263 systemd[1]: Created slice Slice /system/modprobe.
Feb 18 08:55:04 np0005623263 systemd[1]: Starting Load Kernel Module configfs...
Feb 18 08:55:04 np0005623263 systemd[1]: Finished Coldplug All udev Devices.
Feb 18 08:55:04 np0005623263 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 18 08:55:04 np0005623263 systemd[1]: Finished Load Kernel Module configfs.
Feb 18 08:55:04 np0005623263 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 18 08:55:04 np0005623263 systemd[1]: Reached target Network.
Feb 18 08:55:04 np0005623263 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 18 08:55:04 np0005623263 systemd[1]: Starting dracut initqueue hook...
Feb 18 08:55:04 np0005623263 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Feb 18 08:55:04 np0005623263 systemd-udevd[490]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 08:55:04 np0005623263 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Feb 18 08:55:04 np0005623263 kernel: vda: vda1
Feb 18 08:55:04 np0005623263 systemd[1]: Found device /dev/disk/by-uuid/9d578f93-c4e9-4172-8459-ef150e54751c.
Feb 18 08:55:04 np0005623263 kernel: ACPI: bus type drm_connector registered
Feb 18 08:55:04 np0005623263 kernel: scsi host0: ata_piix
Feb 18 08:55:04 np0005623263 kernel: scsi host1: ata_piix
Feb 18 08:55:04 np0005623263 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Feb 18 08:55:04 np0005623263 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Feb 18 08:55:04 np0005623263 systemd[1]: Reached target Initrd Root Device.
Feb 18 08:55:04 np0005623263 kernel: ata1: found unknown device (class 0)
Feb 18 08:55:04 np0005623263 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 18 08:55:04 np0005623263 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Feb 18 08:55:04 np0005623263 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb 18 08:55:04 np0005623263 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb 18 08:55:04 np0005623263 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb 18 08:55:04 np0005623263 kernel: Console: switching to colour dummy device 80x25
Feb 18 08:55:04 np0005623263 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 18 08:55:04 np0005623263 kernel: [drm] features: -context_init
Feb 18 08:55:04 np0005623263 kernel: [drm] number of scanouts: 1
Feb 18 08:55:04 np0005623263 kernel: [drm] number of cap sets: 0
Feb 18 08:55:04 np0005623263 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Feb 18 08:55:04 np0005623263 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Feb 18 08:55:04 np0005623263 kernel: Console: switching to colour frame buffer device 128x48
Feb 18 08:55:04 np0005623263 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 18 08:55:04 np0005623263 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 18 08:55:04 np0005623263 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 18 08:55:05 np0005623263 systemd[1]: Mounting Kernel Configuration File System...
Feb 18 08:55:05 np0005623263 systemd[1]: Mounted Kernel Configuration File System.
Feb 18 08:55:05 np0005623263 systemd[1]: Reached target System Initialization.
Feb 18 08:55:05 np0005623263 systemd[1]: Reached target Basic System.
Feb 18 08:55:05 np0005623263 systemd[1]: Finished dracut initqueue hook.
Feb 18 08:55:05 np0005623263 systemd[1]: Reached target Preparation for Remote File Systems.
Feb 18 08:55:05 np0005623263 systemd[1]: Reached target Remote Encrypted Volumes.
Feb 18 08:55:05 np0005623263 systemd[1]: Reached target Remote File Systems.
Feb 18 08:55:05 np0005623263 systemd[1]: Starting dracut pre-mount hook...
Feb 18 08:55:05 np0005623263 systemd[1]: Finished dracut pre-mount hook.
Feb 18 08:55:05 np0005623263 systemd[1]: Starting File System Check on /dev/disk/by-uuid/9d578f93-c4e9-4172-8459-ef150e54751c...
Feb 18 08:55:05 np0005623263 systemd-fsck[566]: /usr/sbin/fsck.xfs: XFS file system.
Feb 18 08:55:05 np0005623263 systemd[1]: Finished File System Check on /dev/disk/by-uuid/9d578f93-c4e9-4172-8459-ef150e54751c.
Feb 18 08:55:05 np0005623263 systemd[1]: Mounting /sysroot...
Feb 18 08:55:05 np0005623263 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb 18 08:55:05 np0005623263 kernel: XFS (vda1): Mounting V5 Filesystem 9d578f93-c4e9-4172-8459-ef150e54751c
Feb 18 08:55:05 np0005623263 kernel: XFS (vda1): Ending clean mount
Feb 18 08:55:05 np0005623263 systemd[1]: Mounted /sysroot.
Feb 18 08:55:05 np0005623263 systemd[1]: Reached target Initrd Root File System.
Feb 18 08:55:05 np0005623263 systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb 18 08:55:05 np0005623263 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb 18 08:55:05 np0005623263 systemd[1]: Reached target Initrd File Systems.
Feb 18 08:55:05 np0005623263 systemd[1]: Reached target Initrd Default Target.
Feb 18 08:55:05 np0005623263 systemd[1]: Starting dracut mount hook...
Feb 18 08:55:05 np0005623263 systemd[1]: Finished dracut mount hook.
Feb 18 08:55:05 np0005623263 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 18 08:55:05 np0005623263 rpc.idmapd[451]: exiting on signal 15
Feb 18 08:55:05 np0005623263 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb 18 08:55:05 np0005623263 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped target Network.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped target Timer Units.
Feb 18 08:55:05 np0005623263 systemd[1]: dbus.socket: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 18 08:55:05 np0005623263 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped target Initrd Default Target.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped target Basic System.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped target Initrd Root Device.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped target Initrd /usr File System.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped target Path Units.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped target Remote File Systems.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped target Preparation for Remote File Systems.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped target Slice Units.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped target Socket Units.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped target System Initialization.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped target Local File Systems.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped target Swaps.
Feb 18 08:55:05 np0005623263 systemd[1]: dracut-mount.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped dracut mount hook.
Feb 18 08:55:05 np0005623263 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped dracut pre-mount hook.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped target Local Encrypted Volumes.
Feb 18 08:55:05 np0005623263 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 18 08:55:05 np0005623263 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped dracut initqueue hook.
Feb 18 08:55:05 np0005623263 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped Apply Kernel Variables.
Feb 18 08:55:05 np0005623263 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped Create Volatile Files and Directories.
Feb 18 08:55:05 np0005623263 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped Coldplug All udev Devices.
Feb 18 08:55:05 np0005623263 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped dracut pre-trigger hook.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 18 08:55:05 np0005623263 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped Setup Virtual Console.
Feb 18 08:55:05 np0005623263 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 18 08:55:05 np0005623263 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb 18 08:55:05 np0005623263 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Closed udev Control Socket.
Feb 18 08:55:05 np0005623263 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Closed udev Kernel Socket.
Feb 18 08:55:05 np0005623263 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped dracut pre-udev hook.
Feb 18 08:55:05 np0005623263 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped dracut cmdline hook.
Feb 18 08:55:05 np0005623263 systemd[1]: Starting Cleanup udev Database...
Feb 18 08:55:05 np0005623263 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 18 08:55:05 np0005623263 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped Create List of Static Device Nodes.
Feb 18 08:55:05 np0005623263 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Stopped Create System Users.
Feb 18 08:55:05 np0005623263 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 18 08:55:05 np0005623263 systemd[1]: Finished Cleanup udev Database.
Feb 18 08:55:05 np0005623263 systemd[1]: Reached target Switch Root.
Feb 18 08:55:05 np0005623263 systemd[1]: Starting Switch Root...
Feb 18 08:55:05 np0005623263 systemd[1]: Switching root.
Feb 18 08:55:05 np0005623263 systemd-journald[308]: Journal stopped
Feb 18 08:55:06 np0005623263 systemd-journald: Received SIGTERM from PID 1 (n/a).
Feb 18 08:55:06 np0005623263 kernel: audit: type=1404 audit(1771422906.113:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb 18 08:55:06 np0005623263 kernel: SELinux:  policy capability network_peer_controls=1
Feb 18 08:55:06 np0005623263 kernel: SELinux:  policy capability open_perms=1
Feb 18 08:55:06 np0005623263 kernel: SELinux:  policy capability extended_socket_class=1
Feb 18 08:55:06 np0005623263 kernel: SELinux:  policy capability always_check_network=0
Feb 18 08:55:06 np0005623263 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 18 08:55:06 np0005623263 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 18 08:55:06 np0005623263 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 18 08:55:06 np0005623263 kernel: audit: type=1403 audit(1771422906.226:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 18 08:55:06 np0005623263 systemd: Successfully loaded SELinux policy in 116.352ms.
Feb 18 08:55:06 np0005623263 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.108ms.
Feb 18 08:55:06 np0005623263 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 18 08:55:06 np0005623263 systemd: Detected virtualization kvm.
Feb 18 08:55:06 np0005623263 systemd: Detected architecture x86-64.
Feb 18 08:55:06 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 08:55:06 np0005623263 systemd: initrd-switch-root.service: Deactivated successfully.
Feb 18 08:55:06 np0005623263 systemd: Stopped Switch Root.
Feb 18 08:55:06 np0005623263 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 18 08:55:06 np0005623263 systemd: Created slice Slice /system/getty.
Feb 18 08:55:06 np0005623263 systemd: Created slice Slice /system/serial-getty.
Feb 18 08:55:06 np0005623263 systemd: Created slice Slice /system/sshd-keygen.
Feb 18 08:55:06 np0005623263 systemd: Created slice User and Session Slice.
Feb 18 08:55:06 np0005623263 systemd: Started Dispatch Password Requests to Console Directory Watch.
Feb 18 08:55:06 np0005623263 systemd: Started Forward Password Requests to Wall Directory Watch.
Feb 18 08:55:06 np0005623263 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb 18 08:55:06 np0005623263 systemd: Reached target Local Encrypted Volumes.
Feb 18 08:55:06 np0005623263 systemd: Stopped target Switch Root.
Feb 18 08:55:06 np0005623263 systemd: Stopped target Initrd File Systems.
Feb 18 08:55:06 np0005623263 systemd: Stopped target Initrd Root File System.
Feb 18 08:55:06 np0005623263 systemd: Reached target Local Integrity Protected Volumes.
Feb 18 08:55:06 np0005623263 systemd: Reached target Path Units.
Feb 18 08:55:06 np0005623263 systemd: Reached target rpc_pipefs.target.
Feb 18 08:55:06 np0005623263 systemd: Reached target Slice Units.
Feb 18 08:55:06 np0005623263 systemd: Reached target Swaps.
Feb 18 08:55:06 np0005623263 systemd: Reached target Local Verity Protected Volumes.
Feb 18 08:55:06 np0005623263 systemd: Listening on RPCbind Server Activation Socket.
Feb 18 08:55:06 np0005623263 systemd: Reached target RPC Port Mapper.
Feb 18 08:55:06 np0005623263 systemd: Listening on Process Core Dump Socket.
Feb 18 08:55:06 np0005623263 systemd: Listening on initctl Compatibility Named Pipe.
Feb 18 08:55:06 np0005623263 systemd: Listening on udev Control Socket.
Feb 18 08:55:06 np0005623263 systemd: Listening on udev Kernel Socket.
Feb 18 08:55:06 np0005623263 systemd: Mounting Huge Pages File System...
Feb 18 08:55:06 np0005623263 systemd: Mounting POSIX Message Queue File System...
Feb 18 08:55:06 np0005623263 systemd: Mounting Kernel Debug File System...
Feb 18 08:55:06 np0005623263 systemd: Mounting Kernel Trace File System...
Feb 18 08:55:06 np0005623263 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 18 08:55:06 np0005623263 systemd: Starting Create List of Static Device Nodes...
Feb 18 08:55:06 np0005623263 systemd: Starting Load Kernel Module configfs...
Feb 18 08:55:06 np0005623263 systemd: Starting Load Kernel Module drm...
Feb 18 08:55:06 np0005623263 systemd: Starting Load Kernel Module efi_pstore...
Feb 18 08:55:06 np0005623263 systemd: Starting Load Kernel Module fuse...
Feb 18 08:55:06 np0005623263 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Feb 18 08:55:06 np0005623263 systemd: systemd-fsck-root.service: Deactivated successfully.
Feb 18 08:55:06 np0005623263 systemd: Stopped File System Check on Root Device.
Feb 18 08:55:06 np0005623263 systemd: Stopped Journal Service.
Feb 18 08:55:06 np0005623263 systemd: Starting Journal Service...
Feb 18 08:55:06 np0005623263 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 18 08:55:06 np0005623263 systemd: Starting Generate network units from Kernel command line...
Feb 18 08:55:06 np0005623263 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 18 08:55:06 np0005623263 systemd: Starting Remount Root and Kernel File Systems...
Feb 18 08:55:06 np0005623263 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 18 08:55:06 np0005623263 systemd: Starting Apply Kernel Variables...
Feb 18 08:55:06 np0005623263 systemd: Starting Coldplug All udev Devices...
Feb 18 08:55:06 np0005623263 kernel: fuse: init (API version 7.37)
Feb 18 08:55:06 np0005623263 systemd-journald[694]: Journal started
Feb 18 08:55:06 np0005623263 systemd-journald[694]: Runtime Journal (/run/log/journal/621707288f5710e1387f73ed6d90e964) is 8.0M, max 153.6M, 145.6M free.
Feb 18 08:55:06 np0005623263 systemd[1]: Queued start job for default target Multi-User System.
Feb 18 08:55:06 np0005623263 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 18 08:55:06 np0005623263 systemd: Started Journal Service.
Feb 18 08:55:06 np0005623263 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Feb 18 08:55:06 np0005623263 systemd[1]: Mounted Huge Pages File System.
Feb 18 08:55:06 np0005623263 systemd[1]: Mounted POSIX Message Queue File System.
Feb 18 08:55:06 np0005623263 systemd[1]: Mounted Kernel Debug File System.
Feb 18 08:55:06 np0005623263 systemd[1]: Mounted Kernel Trace File System.
Feb 18 08:55:06 np0005623263 systemd[1]: Finished Create List of Static Device Nodes.
Feb 18 08:55:06 np0005623263 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 18 08:55:06 np0005623263 systemd[1]: Finished Load Kernel Module configfs.
Feb 18 08:55:06 np0005623263 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 18 08:55:06 np0005623263 systemd[1]: Finished Load Kernel Module drm.
Feb 18 08:55:06 np0005623263 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 18 08:55:06 np0005623263 systemd[1]: Finished Load Kernel Module efi_pstore.
Feb 18 08:55:06 np0005623263 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 18 08:55:06 np0005623263 systemd[1]: Finished Load Kernel Module fuse.
Feb 18 08:55:06 np0005623263 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Feb 18 08:55:06 np0005623263 systemd[1]: Finished Generate network units from Kernel command line.
Feb 18 08:55:06 np0005623263 systemd[1]: Finished Remount Root and Kernel File Systems.
Feb 18 08:55:06 np0005623263 systemd[1]: Finished Apply Kernel Variables.
Feb 18 08:55:06 np0005623263 systemd[1]: Mounting FUSE Control File System...
Feb 18 08:55:06 np0005623263 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 18 08:55:06 np0005623263 systemd[1]: Starting Rebuild Hardware Database...
Feb 18 08:55:06 np0005623263 systemd[1]: Starting Flush Journal to Persistent Storage...
Feb 18 08:55:06 np0005623263 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 18 08:55:06 np0005623263 systemd[1]: Starting Load/Save OS Random Seed...
Feb 18 08:55:06 np0005623263 systemd[1]: Starting Create System Users...
Feb 18 08:55:06 np0005623263 systemd[1]: Mounted FUSE Control File System.
Feb 18 08:55:06 np0005623263 systemd-journald[694]: Runtime Journal (/run/log/journal/621707288f5710e1387f73ed6d90e964) is 8.0M, max 153.6M, 145.6M free.
Feb 18 08:55:06 np0005623263 systemd-journald[694]: Received client request to flush runtime journal.
Feb 18 08:55:06 np0005623263 systemd[1]: Finished Flush Journal to Persistent Storage.
Feb 18 08:55:06 np0005623263 systemd[1]: Finished Load/Save OS Random Seed.
Feb 18 08:55:06 np0005623263 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 18 08:55:06 np0005623263 systemd[1]: Finished Create System Users.
Feb 18 08:55:06 np0005623263 systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 18 08:55:06 np0005623263 systemd[1]: Finished Coldplug All udev Devices.
Feb 18 08:55:07 np0005623263 systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 18 08:55:07 np0005623263 systemd[1]: Reached target Preparation for Local File Systems.
Feb 18 08:55:07 np0005623263 systemd[1]: Reached target Local File Systems.
Feb 18 08:55:07 np0005623263 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb 18 08:55:07 np0005623263 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Feb 18 08:55:07 np0005623263 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 18 08:55:07 np0005623263 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Feb 18 08:55:07 np0005623263 systemd[1]: Starting Automatic Boot Loader Update...
Feb 18 08:55:07 np0005623263 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Feb 18 08:55:07 np0005623263 systemd[1]: Starting Create Volatile Files and Directories...
Feb 18 08:55:07 np0005623263 bootctl[712]: Couldn't find EFI system partition, skipping.
Feb 18 08:55:07 np0005623263 systemd[1]: Finished Automatic Boot Loader Update.
Feb 18 08:55:07 np0005623263 systemd[1]: Finished Create Volatile Files and Directories.
Feb 18 08:55:07 np0005623263 systemd[1]: Starting Security Auditing Service...
Feb 18 08:55:07 np0005623263 systemd[1]: Starting RPC Bind...
Feb 18 08:55:07 np0005623263 systemd[1]: Starting Rebuild Journal Catalog...
Feb 18 08:55:07 np0005623263 auditd[718]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Feb 18 08:55:07 np0005623263 auditd[718]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Feb 18 08:55:07 np0005623263 systemd[1]: Finished Rebuild Journal Catalog.
Feb 18 08:55:07 np0005623263 systemd[1]: Started RPC Bind.
Feb 18 08:55:07 np0005623263 augenrules[723]: /sbin/augenrules: No change
Feb 18 08:55:07 np0005623263 augenrules[738]: No rules
Feb 18 08:55:07 np0005623263 augenrules[738]: enabled 1
Feb 18 08:55:07 np0005623263 augenrules[738]: failure 1
Feb 18 08:55:07 np0005623263 augenrules[738]: pid 718
Feb 18 08:55:07 np0005623263 augenrules[738]: rate_limit 0
Feb 18 08:55:07 np0005623263 augenrules[738]: backlog_limit 8192
Feb 18 08:55:07 np0005623263 augenrules[738]: lost 0
Feb 18 08:55:07 np0005623263 augenrules[738]: backlog 0
Feb 18 08:55:07 np0005623263 augenrules[738]: backlog_wait_time 60000
Feb 18 08:55:07 np0005623263 augenrules[738]: backlog_wait_time_actual 0
Feb 18 08:55:07 np0005623263 augenrules[738]: enabled 1
Feb 18 08:55:07 np0005623263 augenrules[738]: failure 1
Feb 18 08:55:07 np0005623263 augenrules[738]: pid 718
Feb 18 08:55:07 np0005623263 augenrules[738]: rate_limit 0
Feb 18 08:55:07 np0005623263 augenrules[738]: backlog_limit 8192
Feb 18 08:55:07 np0005623263 augenrules[738]: lost 0
Feb 18 08:55:07 np0005623263 augenrules[738]: backlog 0
Feb 18 08:55:07 np0005623263 augenrules[738]: backlog_wait_time 60000
Feb 18 08:55:07 np0005623263 augenrules[738]: backlog_wait_time_actual 0
Feb 18 08:55:07 np0005623263 augenrules[738]: enabled 1
Feb 18 08:55:07 np0005623263 augenrules[738]: failure 1
Feb 18 08:55:07 np0005623263 augenrules[738]: pid 718
Feb 18 08:55:07 np0005623263 augenrules[738]: rate_limit 0
Feb 18 08:55:07 np0005623263 augenrules[738]: backlog_limit 8192
Feb 18 08:55:07 np0005623263 augenrules[738]: lost 0
Feb 18 08:55:07 np0005623263 augenrules[738]: backlog 0
Feb 18 08:55:07 np0005623263 augenrules[738]: backlog_wait_time 60000
Feb 18 08:55:07 np0005623263 augenrules[738]: backlog_wait_time_actual 0
Feb 18 08:55:07 np0005623263 systemd[1]: Started Security Auditing Service.
Feb 18 08:55:07 np0005623263 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Feb 18 08:55:07 np0005623263 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Feb 18 08:55:07 np0005623263 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Feb 18 08:55:07 np0005623263 systemd[1]: Finished Rebuild Hardware Database.
Feb 18 08:55:07 np0005623263 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 18 08:55:07 np0005623263 systemd[1]: Starting Update is Completed...
Feb 18 08:55:07 np0005623263 systemd[1]: Finished Update is Completed.
Feb 18 08:55:07 np0005623263 systemd-udevd[746]: Using default interface naming scheme 'rhel-9.0'.
Feb 18 08:55:07 np0005623263 systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 18 08:55:07 np0005623263 systemd[1]: Starting Load Kernel Module configfs...
Feb 18 08:55:07 np0005623263 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 18 08:55:07 np0005623263 systemd[1]: Finished Load Kernel Module configfs.
Feb 18 08:55:07 np0005623263 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Feb 18 08:55:07 np0005623263 systemd-udevd[753]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 08:55:07 np0005623263 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 18 08:55:07 np0005623263 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Feb 18 08:55:07 np0005623263 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 18 08:55:07 np0005623263 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 18 08:55:07 np0005623263 kernel: kvm_amd: TSC scaling supported
Feb 18 08:55:07 np0005623263 kernel: kvm_amd: Nested Virtualization enabled
Feb 18 08:55:07 np0005623263 kernel: kvm_amd: Nested Paging enabled
Feb 18 08:55:07 np0005623263 kernel: kvm_amd: LBR virtualization supported
Feb 18 08:55:07 np0005623263 systemd[1]: Reached target System Initialization.
Feb 18 08:55:07 np0005623263 systemd[1]: Started dnf makecache --timer.
Feb 18 08:55:07 np0005623263 systemd[1]: Started Daily rotation of log files.
Feb 18 08:55:07 np0005623263 systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb 18 08:55:07 np0005623263 systemd[1]: Reached target Timer Units.
Feb 18 08:55:07 np0005623263 systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 18 08:55:07 np0005623263 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Feb 18 08:55:07 np0005623263 systemd[1]: Reached target Socket Units.
Feb 18 08:55:07 np0005623263 systemd[1]: Starting D-Bus System Message Bus...
Feb 18 08:55:07 np0005623263 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 18 08:55:07 np0005623263 systemd[1]: Started D-Bus System Message Bus.
Feb 18 08:55:07 np0005623263 systemd[1]: Reached target Basic System.
Feb 18 08:55:07 np0005623263 dbus-broker-lau[819]: Ready
Feb 18 08:55:07 np0005623263 systemd[1]: Starting NTP client/server...
Feb 18 08:55:07 np0005623263 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Feb 18 08:55:07 np0005623263 systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb 18 08:55:07 np0005623263 systemd[1]: Starting IPv4 firewall with iptables...
Feb 18 08:55:07 np0005623263 systemd[1]: Started irqbalance daemon.
Feb 18 08:55:07 np0005623263 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Feb 18 08:55:07 np0005623263 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 18 08:55:07 np0005623263 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 18 08:55:07 np0005623263 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 18 08:55:07 np0005623263 systemd[1]: Reached target sshd-keygen.target.
Feb 18 08:55:07 np0005623263 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Feb 18 08:55:07 np0005623263 systemd[1]: Reached target User and Group Name Lookups.
Feb 18 08:55:07 np0005623263 systemd[1]: Starting User Login Management...
Feb 18 08:55:07 np0005623263 systemd[1]: Finished Restore /run/initramfs on shutdown.
Feb 18 08:55:08 np0005623263 chronyd[839]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb 18 08:55:08 np0005623263 systemd-logind[831]: New seat seat0.
Feb 18 08:55:08 np0005623263 chronyd[839]: Loaded 0 symmetric keys
Feb 18 08:55:08 np0005623263 systemd-logind[831]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 18 08:55:08 np0005623263 systemd-logind[831]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 18 08:55:08 np0005623263 systemd[1]: Started User Login Management.
Feb 18 08:55:08 np0005623263 chronyd[839]: Using right/UTC timezone to obtain leap second data
Feb 18 08:55:08 np0005623263 chronyd[839]: Loaded seccomp filter (level 2)
Feb 18 08:55:08 np0005623263 systemd[1]: Started NTP client/server.
Feb 18 08:55:08 np0005623263 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Feb 18 08:55:08 np0005623263 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Feb 18 08:55:08 np0005623263 iptables.init[826]: iptables: Applying firewall rules: [  OK  ]
Feb 18 08:55:08 np0005623263 systemd[1]: Finished IPv4 firewall with iptables.
Feb 18 08:55:08 np0005623263 cloud-init[849]: Cloud-init v. 24.4-8.el9 running 'init-local' at Wed, 18 Feb 2026 13:55:08 +0000. Up 6.17 seconds.
Feb 18 08:55:08 np0005623263 systemd[1]: run-cloud\x2dinit-tmp-tmp3vo71i1b.mount: Deactivated successfully.
Feb 18 08:55:08 np0005623263 systemd[1]: Starting Hostname Service...
Feb 18 08:55:08 np0005623263 systemd[1]: Started Hostname Service.
Feb 18 08:55:09 np0005623263 systemd-hostnamed[863]: Hostname set to <np0005623263.novalocal> (static)
Feb 18 08:55:09 np0005623263 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Feb 18 08:55:09 np0005623263 systemd[1]: Reached target Preparation for Network.
Feb 18 08:55:09 np0005623263 systemd[1]: Starting Network Manager...
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.1999] NetworkManager (version 1.54.3-2.el9) is starting... (boot:054a86f0-adcd-4801-9dec-c21d7e1147c9)
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2004] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2125] manager[0x5577906a9000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2156] hostname: hostname: using hostnamed
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2156] hostname: static hostname changed from (none) to "np0005623263.novalocal"
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2160] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2243] manager[0x5577906a9000]: rfkill: Wi-Fi hardware radio set enabled
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2243] manager[0x5577906a9000]: rfkill: WWAN hardware radio set enabled
Feb 18 08:55:09 np0005623263 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2320] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2320] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2320] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2320] manager: Networking is enabled by state file
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2321] settings: Loaded settings plugin: keyfile (internal)
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2352] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2373] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2385] dhcp: init: Using DHCP client 'internal'
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2387] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2397] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2407] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2416] device (lo): Activation: starting connection 'lo' (b53812ff-8ea4-495d-a77c-9332883d7f99)
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2423] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2425] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2445] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2448] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2449] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2450] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2451] device (eth0): carrier: link connected
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2453] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2456] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2460] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2462] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2463] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2464] manager: NetworkManager state is now CONNECTING
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2465] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 18 08:55:09 np0005623263 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2468] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2471] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 18 08:55:09 np0005623263 systemd[1]: Started Network Manager.
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2499] dhcp4 (eth0): state changed new lease, address=38.102.83.12
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2506] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 18 08:55:09 np0005623263 systemd[1]: Reached target Network.
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2520] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 18 08:55:09 np0005623263 systemd[1]: Starting Network Manager Wait Online...
Feb 18 08:55:09 np0005623263 systemd[1]: Starting GSSAPI Proxy Daemon...
Feb 18 08:55:09 np0005623263 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2649] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2653] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2654] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2659] device (lo): Activation: successful, device activated.
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2670] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2673] manager: NetworkManager state is now CONNECTED_SITE
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2675] device (eth0): Activation: successful, device activated.
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2680] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 18 08:55:09 np0005623263 NetworkManager[868]: <info>  [1771422909.2682] manager: startup complete
Feb 18 08:55:09 np0005623263 systemd[1]: Started GSSAPI Proxy Daemon.
Feb 18 08:55:09 np0005623263 systemd[1]: Finished Network Manager Wait Online.
Feb 18 08:55:09 np0005623263 systemd[1]: Starting Cloud-init: Network Stage...
Feb 18 08:55:09 np0005623263 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 18 08:55:09 np0005623263 systemd[1]: Reached target NFS client services.
Feb 18 08:55:09 np0005623263 systemd[1]: Reached target Preparation for Remote File Systems.
Feb 18 08:55:09 np0005623263 systemd[1]: Reached target Remote File Systems.
Feb 18 08:55:09 np0005623263 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 18 08:55:09 np0005623263 cloud-init[932]: Cloud-init v. 24.4-8.el9 running 'init' at Wed, 18 Feb 2026 13:55:09 +0000. Up 7.06 seconds.
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: |  eth0  | True |         38.102.83.12        | 255.255.255.0 | global | fa:16:3e:82:0b:94 |
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: |  eth0  | True | fe80::f816:3eff:fe82:b94/64 |       .       |  link  | fa:16:3e:82:0b:94 |
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Feb 18 08:55:09 np0005623263 cloud-init[932]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 18 08:55:10 np0005623263 cloud-init[932]: Generating public/private rsa key pair.
Feb 18 08:55:10 np0005623263 cloud-init[932]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Feb 18 08:55:10 np0005623263 cloud-init[932]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Feb 18 08:55:10 np0005623263 cloud-init[932]: The key fingerprint is:
Feb 18 08:55:10 np0005623263 cloud-init[932]: SHA256:VeXwMBbwuZIiRMkBkQWApX1o9Y3ET3hjuBVWj+LMED0 root@np0005623263.novalocal
Feb 18 08:55:10 np0005623263 cloud-init[932]: The key's randomart image is:
Feb 18 08:55:10 np0005623263 cloud-init[932]: +---[RSA 3072]----+
Feb 18 08:55:10 np0005623263 cloud-init[932]: | oo.=O**ooo.Bo.  |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |.o o.+*=E  * B   |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |. + . =Booo + o  |
Feb 18 08:55:10 np0005623263 cloud-init[932]: | . . ..=.o . .   |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |      . S o .    |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |       . . .     |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |                 |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |                 |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |                 |
Feb 18 08:55:10 np0005623263 cloud-init[932]: +----[SHA256]-----+
Feb 18 08:55:10 np0005623263 cloud-init[932]: Generating public/private ecdsa key pair.
Feb 18 08:55:10 np0005623263 cloud-init[932]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Feb 18 08:55:10 np0005623263 cloud-init[932]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Feb 18 08:55:10 np0005623263 cloud-init[932]: The key fingerprint is:
Feb 18 08:55:10 np0005623263 cloud-init[932]: SHA256:7mDqopQa0+zi5kHBxHgHnTxr29wW936OWTAMmUG34kY root@np0005623263.novalocal
Feb 18 08:55:10 np0005623263 cloud-init[932]: The key's randomart image is:
Feb 18 08:55:10 np0005623263 cloud-init[932]: +---[ECDSA 256]---+
Feb 18 08:55:10 np0005623263 cloud-init[932]: |o..+ .   .o .    |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |+.. *      = .   |
Feb 18 08:55:10 np0005623263 cloud-init[932]: | + . o    E .    |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |  . o   .o.+     |
Feb 18 08:55:10 np0005623263 cloud-init[932]: | . . + .Soo.+    |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |.o. . o.o.  .o   |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |o+o   o..  .  .  |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |+=o  o o    .+.  |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |*=.oo   .   oo.  |
Feb 18 08:55:10 np0005623263 cloud-init[932]: +----[SHA256]-----+
Feb 18 08:55:10 np0005623263 cloud-init[932]: Generating public/private ed25519 key pair.
Feb 18 08:55:10 np0005623263 cloud-init[932]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Feb 18 08:55:10 np0005623263 cloud-init[932]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Feb 18 08:55:10 np0005623263 cloud-init[932]: The key fingerprint is:
Feb 18 08:55:10 np0005623263 cloud-init[932]: SHA256:OJ+tMYf451jM72TmQ6S/O/AehzrJrSHjzZSzthEJqSc root@np0005623263.novalocal
Feb 18 08:55:10 np0005623263 cloud-init[932]: The key's randomart image is:
Feb 18 08:55:10 np0005623263 cloud-init[932]: +--[ED25519 256]--+
Feb 18 08:55:10 np0005623263 cloud-init[932]: |                 |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |         .       |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |        o        |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |       o . ..    |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |      E S oo     |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |       * *oo..   |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |      . O.%B* .  |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |       o &*%*o   |
Feb 18 08:55:10 np0005623263 cloud-init[932]: |        =+O*B=   |
Feb 18 08:55:10 np0005623263 cloud-init[932]: +----[SHA256]-----+
Feb 18 08:55:10 np0005623263 systemd[1]: Finished Cloud-init: Network Stage.
Feb 18 08:55:10 np0005623263 systemd[1]: Reached target Cloud-config availability.
Feb 18 08:55:10 np0005623263 systemd[1]: Reached target Network is Online.
Feb 18 08:55:10 np0005623263 systemd[1]: Starting Cloud-init: Config Stage...
Feb 18 08:55:10 np0005623263 systemd[1]: Starting Crash recovery kernel arming...
Feb 18 08:55:10 np0005623263 systemd[1]: Starting Notify NFS peers of a restart...
Feb 18 08:55:10 np0005623263 systemd[1]: Starting System Logging Service...
Feb 18 08:55:10 np0005623263 sm-notify[1014]: Version 2.5.4 starting
Feb 18 08:55:10 np0005623263 systemd[1]: Starting OpenSSH server daemon...
Feb 18 08:55:10 np0005623263 systemd[1]: Starting Permit User Sessions...
Feb 18 08:55:10 np0005623263 systemd[1]: Started Notify NFS peers of a restart.
Feb 18 08:55:10 np0005623263 systemd[1]: Started OpenSSH server daemon.
Feb 18 08:55:10 np0005623263 systemd[1]: Finished Permit User Sessions.
Feb 18 08:55:10 np0005623263 systemd[1]: Started Command Scheduler.
Feb 18 08:55:10 np0005623263 systemd[1]: Started Getty on tty1.
Feb 18 08:55:10 np0005623263 systemd[1]: Started Serial Getty on ttyS0.
Feb 18 08:55:10 np0005623263 systemd[1]: Reached target Login Prompts.
Feb 18 08:55:10 np0005623263 rsyslogd[1015]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1015" x-info="https://www.rsyslog.com"] start
Feb 18 08:55:10 np0005623263 rsyslogd[1015]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Feb 18 08:55:10 np0005623263 systemd[1]: Started System Logging Service.
Feb 18 08:55:10 np0005623263 systemd[1]: Reached target Multi-User System.
Feb 18 08:55:10 np0005623263 systemd[1]: Starting Record Runlevel Change in UTMP...
Feb 18 08:55:10 np0005623263 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 18 08:55:10 np0005623263 systemd[1]: Finished Record Runlevel Change in UTMP.
Feb 18 08:55:10 np0005623263 rsyslogd[1015]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 18 08:55:11 np0005623263 kdumpctl[1026]: kdump: No kdump initial ramdisk found.
Feb 18 08:55:11 np0005623263 kdumpctl[1026]: kdump: Rebuilding /boot/initramfs-5.14.0-681.el9.x86_64kdump.img
Feb 18 08:55:11 np0005623263 cloud-init[1183]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Wed, 18 Feb 2026 13:55:11 +0000. Up 8.56 seconds.
Feb 18 08:55:11 np0005623263 systemd[1]: Finished Cloud-init: Config Stage.
Feb 18 08:55:11 np0005623263 systemd[1]: Starting Cloud-init: Final Stage...
Feb 18 08:55:11 np0005623263 cloud-init[1473]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Wed, 18 Feb 2026 13:55:11 +0000. Up 8.99 seconds.
Feb 18 08:55:11 np0005623263 cloud-init[1517]: #############################################################
Feb 18 08:55:11 np0005623263 cloud-init[1521]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Feb 18 08:55:11 np0005623263 cloud-init[1526]: 256 SHA256:7mDqopQa0+zi5kHBxHgHnTxr29wW936OWTAMmUG34kY root@np0005623263.novalocal (ECDSA)
Feb 18 08:55:11 np0005623263 cloud-init[1530]: 256 SHA256:OJ+tMYf451jM72TmQ6S/O/AehzrJrSHjzZSzthEJqSc root@np0005623263.novalocal (ED25519)
Feb 18 08:55:11 np0005623263 cloud-init[1532]: 3072 SHA256:VeXwMBbwuZIiRMkBkQWApX1o9Y3ET3hjuBVWj+LMED0 root@np0005623263.novalocal (RSA)
Feb 18 08:55:11 np0005623263 cloud-init[1533]: -----END SSH HOST KEY FINGERPRINTS-----
Feb 18 08:55:11 np0005623263 cloud-init[1536]: #############################################################
Feb 18 08:55:11 np0005623263 dracut[1535]: dracut-057-110.git20260130.el9
Feb 18 08:55:11 np0005623263 cloud-init[1473]: Cloud-init v. 24.4-8.el9 finished at Wed, 18 Feb 2026 13:55:11 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.15 seconds
Feb 18 08:55:11 np0005623263 systemd[1]: Finished Cloud-init: Final Stage.
Feb 18 08:55:11 np0005623263 systemd[1]: Reached target Cloud-init target.
Feb 18 08:55:11 np0005623263 dracut[1541]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/9d578f93-c4e9-4172-8459-ef150e54751c /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-681.el9.x86_64kdump.img 5.14.0-681.el9.x86_64
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: memstrack is not available
Feb 18 08:55:12 np0005623263 dracut[1541]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 18 08:55:12 np0005623263 dracut[1541]: memstrack is not available
Feb 18 08:55:12 np0005623263 dracut[1541]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 18 08:55:13 np0005623263 dracut[1541]: *** Including module: systemd ***
Feb 18 08:55:13 np0005623263 dracut[1541]: *** Including module: fips ***
Feb 18 08:55:13 np0005623263 dracut[1541]: *** Including module: systemd-initrd ***
Feb 18 08:55:13 np0005623263 dracut[1541]: *** Including module: i18n ***
Feb 18 08:55:13 np0005623263 dracut[1541]: *** Including module: drm ***
Feb 18 08:55:14 np0005623263 dracut[1541]: *** Including module: prefixdevname ***
Feb 18 08:55:14 np0005623263 dracut[1541]: *** Including module: kernel-modules ***
Feb 18 08:55:14 np0005623263 chronyd[839]: Selected source 167.160.187.179 (2.centos.pool.ntp.org)
Feb 18 08:55:14 np0005623263 chronyd[839]: System clock TAI offset set to 37 seconds
Feb 18 08:55:14 np0005623263 kernel: block vda: the capability attribute has been deprecated.
Feb 18 08:55:14 np0005623263 dracut[1541]: *** Including module: kernel-modules-extra ***
Feb 18 08:55:14 np0005623263 dracut[1541]: *** Including module: qemu ***
Feb 18 08:55:14 np0005623263 dracut[1541]: *** Including module: fstab-sys ***
Feb 18 08:55:14 np0005623263 dracut[1541]: *** Including module: rootfs-block ***
Feb 18 08:55:14 np0005623263 dracut[1541]: *** Including module: terminfo ***
Feb 18 08:55:14 np0005623263 dracut[1541]: *** Including module: udev-rules ***
Feb 18 08:55:15 np0005623263 dracut[1541]: Skipping udev rule: 91-permissions.rules
Feb 18 08:55:15 np0005623263 dracut[1541]: Skipping udev rule: 80-drivers-modprobe.rules
Feb 18 08:55:15 np0005623263 dracut[1541]: *** Including module: virtiofs ***
Feb 18 08:55:15 np0005623263 dracut[1541]: *** Including module: dracut-systemd ***
Feb 18 08:55:15 np0005623263 dracut[1541]: *** Including module: usrmount ***
Feb 18 08:55:15 np0005623263 dracut[1541]: *** Including module: base ***
Feb 18 08:55:15 np0005623263 dracut[1541]: *** Including module: fs-lib ***
Feb 18 08:55:15 np0005623263 dracut[1541]: *** Including module: kdumpbase ***
Feb 18 08:55:16 np0005623263 dracut[1541]: *** Including module: microcode_ctl-fw_dir_override ***
Feb 18 08:55:16 np0005623263 dracut[1541]:  microcode_ctl module: mangling fw_dir
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: configuration "intel" is ignored
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Feb 18 08:55:16 np0005623263 dracut[1541]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Feb 18 08:55:16 np0005623263 dracut[1541]: *** Including module: openssl ***
Feb 18 08:55:16 np0005623263 dracut[1541]: *** Including module: shutdown ***
Feb 18 08:55:16 np0005623263 dracut[1541]: *** Including module: squash ***
Feb 18 08:55:16 np0005623263 dracut[1541]: *** Including modules done ***
Feb 18 08:55:16 np0005623263 dracut[1541]: *** Installing kernel module dependencies ***
Feb 18 08:55:17 np0005623263 dracut[1541]: *** Installing kernel module dependencies done ***
Feb 18 08:55:17 np0005623263 dracut[1541]: *** Resolving executable dependencies ***
Feb 18 08:55:18 np0005623263 irqbalance[827]: Cannot change IRQ 35 affinity: Operation not permitted
Feb 18 08:55:18 np0005623263 irqbalance[827]: IRQ 35 affinity is now unmanaged
Feb 18 08:55:18 np0005623263 irqbalance[827]: Cannot change IRQ 33 affinity: Operation not permitted
Feb 18 08:55:18 np0005623263 irqbalance[827]: IRQ 33 affinity is now unmanaged
Feb 18 08:55:18 np0005623263 irqbalance[827]: Cannot change IRQ 31 affinity: Operation not permitted
Feb 18 08:55:18 np0005623263 irqbalance[827]: IRQ 31 affinity is now unmanaged
Feb 18 08:55:18 np0005623263 irqbalance[827]: Cannot change IRQ 28 affinity: Operation not permitted
Feb 18 08:55:18 np0005623263 irqbalance[827]: IRQ 28 affinity is now unmanaged
Feb 18 08:55:18 np0005623263 irqbalance[827]: Cannot change IRQ 34 affinity: Operation not permitted
Feb 18 08:55:18 np0005623263 irqbalance[827]: IRQ 34 affinity is now unmanaged
Feb 18 08:55:18 np0005623263 irqbalance[827]: Cannot change IRQ 32 affinity: Operation not permitted
Feb 18 08:55:18 np0005623263 irqbalance[827]: IRQ 32 affinity is now unmanaged
Feb 18 08:55:18 np0005623263 irqbalance[827]: Cannot change IRQ 30 affinity: Operation not permitted
Feb 18 08:55:18 np0005623263 irqbalance[827]: IRQ 30 affinity is now unmanaged
Feb 18 08:55:18 np0005623263 irqbalance[827]: Cannot change IRQ 29 affinity: Operation not permitted
Feb 18 08:55:18 np0005623263 irqbalance[827]: IRQ 29 affinity is now unmanaged
Feb 18 08:55:18 np0005623263 dracut[1541]: *** Resolving executable dependencies done ***
Feb 18 08:55:18 np0005623263 dracut[1541]: *** Generating early-microcode cpio image ***
Feb 18 08:55:18 np0005623263 dracut[1541]: *** Store current command line parameters ***
Feb 18 08:55:18 np0005623263 dracut[1541]: Stored kernel commandline:
Feb 18 08:55:18 np0005623263 dracut[1541]: No dracut internal kernel commandline stored in the initramfs
Feb 18 08:55:19 np0005623263 dracut[1541]: *** Install squash loader ***
Feb 18 08:55:19 np0005623263 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 18 08:55:19 np0005623263 dracut[1541]: *** Squashing the files inside the initramfs ***
Feb 18 08:55:21 np0005623263 dracut[1541]: *** Squashing the files inside the initramfs done ***
Feb 18 08:55:21 np0005623263 dracut[1541]: *** Creating image file '/boot/initramfs-5.14.0-681.el9.x86_64kdump.img' ***
Feb 18 08:55:21 np0005623263 dracut[1541]: *** Hardlinking files ***
Feb 18 08:55:21 np0005623263 dracut[1541]: *** Hardlinking files done ***
Feb 18 08:55:21 np0005623263 dracut[1541]: *** Creating initramfs image file '/boot/initramfs-5.14.0-681.el9.x86_64kdump.img' done ***
Feb 18 08:55:22 np0005623263 kdumpctl[1026]: kdump: kexec: loaded kdump kernel
Feb 18 08:55:22 np0005623263 kdumpctl[1026]: kdump: Starting kdump: [OK]
Feb 18 08:55:22 np0005623263 systemd[1]: Finished Crash recovery kernel arming.
Feb 18 08:55:22 np0005623263 systemd[1]: Startup finished in 1.356s (kernel) + 2.253s (initrd) + 15.959s (userspace) = 19.569s.
Feb 18 08:55:27 np0005623263 systemd-logind[831]: New session 1 of user zuul.
Feb 18 08:55:27 np0005623263 systemd[1]: Created slice User Slice of UID 1000.
Feb 18 08:55:27 np0005623263 systemd[1]: Starting User Runtime Directory /run/user/1000...
Feb 18 08:55:27 np0005623263 systemd[1]: Finished User Runtime Directory /run/user/1000.
Feb 18 08:55:27 np0005623263 systemd[1]: Starting User Manager for UID 1000...
Feb 18 08:55:27 np0005623263 systemd[4802]: Queued start job for default target Main User Target.
Feb 18 08:55:27 np0005623263 systemd[4802]: Created slice User Application Slice.
Feb 18 08:55:27 np0005623263 systemd[4802]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 18 08:55:27 np0005623263 systemd[4802]: Started Daily Cleanup of User's Temporary Directories.
Feb 18 08:55:27 np0005623263 systemd[4802]: Reached target Paths.
Feb 18 08:55:27 np0005623263 systemd[4802]: Reached target Timers.
Feb 18 08:55:27 np0005623263 systemd[4802]: Starting D-Bus User Message Bus Socket...
Feb 18 08:55:27 np0005623263 systemd[4802]: Starting Create User's Volatile Files and Directories...
Feb 18 08:55:28 np0005623263 systemd[4802]: Finished Create User's Volatile Files and Directories.
Feb 18 08:55:28 np0005623263 systemd[4802]: Listening on D-Bus User Message Bus Socket.
Feb 18 08:55:28 np0005623263 systemd[4802]: Reached target Sockets.
Feb 18 08:55:28 np0005623263 systemd[4802]: Reached target Basic System.
Feb 18 08:55:28 np0005623263 systemd[4802]: Reached target Main User Target.
Feb 18 08:55:28 np0005623263 systemd[4802]: Startup finished in 117ms.
Feb 18 08:55:28 np0005623263 systemd[1]: Started User Manager for UID 1000.
Feb 18 08:55:28 np0005623263 systemd[1]: Started Session 1 of User zuul.
Feb 18 08:55:28 np0005623263 python3[4885]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 08:55:31 np0005623263 python3[4913]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 08:55:37 np0005623263 python3[4971]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 08:55:38 np0005623263 python3[5011]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Feb 18 08:55:39 np0005623263 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 18 08:55:40 np0005623263 python3[5039]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCo7Q4FZOuEzJlH3KB1E9D97vDVTl9gkxOcp8ShNKbSCEZYBj+r5WH2hMo9p5jtfcbMrE3HsBJehDLsWWn8MDVtBIO7C1sgA4x9vQUvzlJ4nvGH5qzsMSICG5AAJogj/Vr4WkU6OqcudZ9CNI/tIiV/bLAMN1CxA2xNqX8CaMYapNQP8EDyQvlmNvWLKmsWPElWrZCMVtrkz0ueJywL1aUX6lW8UeALUgXX4AcRu91jUcmgTB2IVynFfWQeLZ7SmN9xibibXr9GW/7eSDNMRSrI7D6KgQev50jlwymxHdDmBUMqXMlXON5b7H+P2NAZbL0yxtC8zkukZJLTunUwXaoayqVLOEaN8SkC4h2xgFAcobeG0COQuw5KFNG4KzZ6ihSx8rBClzZoDFv32SEn3zrZqtgGh+JPnQDF9X96h9homCRnRHg++HMc5v3S/chNCiQG7MX1lD77RJW5+Mvt2uHTRECoIU2QvsV8RYVuxohGVdi9UCOIkHZu6952LSFgV+U= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:40 np0005623263 python3[5063]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:55:41 np0005623263 python3[5162]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 08:55:41 np0005623263 python3[5233]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771422940.8032908-207-38837953307674/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=a7ce07c0829e460699de7918e67f6970_id_rsa follow=False checksum=b8872c4fda6cfe8947b29eaa6cb34932fae3fec5 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:55:42 np0005623263 python3[5356]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 08:55:42 np0005623263 python3[5427]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771422941.7541926-240-111022362582549/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=a7ce07c0829e460699de7918e67f6970_id_rsa.pub follow=False checksum=3d8c5c32ca91926d57f0580d38431bfa034dd934 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:55:43 np0005623263 python3[5475]: ansible-ping Invoked with data=pong
Feb 18 08:55:44 np0005623263 python3[5499]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 08:55:46 np0005623263 python3[5557]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Feb 18 08:55:47 np0005623263 python3[5589]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:55:47 np0005623263 python3[5613]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:55:48 np0005623263 python3[5637]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:55:48 np0005623263 python3[5661]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:55:48 np0005623263 python3[5685]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:55:48 np0005623263 python3[5709]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:55:50 np0005623263 python3[5735]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:55:51 np0005623263 python3[5813]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 08:55:51 np0005623263 python3[5886]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1771422950.6905565-21-106704718105438/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:55:52 np0005623263 python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:52 np0005623263 python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:52 np0005623263 python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:53 np0005623263 python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:53 np0005623263 python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:53 np0005623263 python3[6054]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:53 np0005623263 python3[6078]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:54 np0005623263 python3[6102]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:54 np0005623263 python3[6126]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:54 np0005623263 python3[6150]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:54 np0005623263 python3[6174]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:55 np0005623263 python3[6198]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:55 np0005623263 python3[6222]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICWBreHW95Wz2Toz5YwCGQwFcUG8oFYkienDh9tntmDc ralfieri@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:55 np0005623263 python3[6246]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:55 np0005623263 python3[6270]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:56 np0005623263 python3[6294]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:56 np0005623263 python3[6318]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:56 np0005623263 python3[6342]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:56 np0005623263 python3[6366]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:57 np0005623263 python3[6390]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:57 np0005623263 python3[6414]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:57 np0005623263 python3[6438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:58 np0005623263 python3[6462]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:58 np0005623263 python3[6486]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:58 np0005623263 python3[6510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:55:58 np0005623263 python3[6534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 08:56:02 np0005623263 python3[6560]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 18 08:56:02 np0005623263 systemd[1]: Starting Time & Date Service...
Feb 18 08:56:02 np0005623263 systemd[1]: Started Time & Date Service.
Feb 18 08:56:02 np0005623263 systemd-timedated[6562]: Changed time zone to 'UTC' (UTC).
Feb 18 08:56:02 np0005623263 python3[6591]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:56:03 np0005623263 python3[6667]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 08:56:03 np0005623263 python3[6738]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1771422962.823135-153-229143324508875/source _original_basename=tmps_etp4od follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:56:03 np0005623263 python3[6838]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 08:56:04 np0005623263 python3[6909]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1771422963.7132955-183-253292410524119/source _original_basename=tmp9oopmzn0 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:56:05 np0005623263 python3[7011]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 08:56:05 np0005623263 python3[7084]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1771422964.7824495-231-204856703423034/source _original_basename=tmp2okdztkk follow=False checksum=bee07c3642df91d0fc882b8d0517473ba529622f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:56:06 np0005623263 python3[7132]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 08:56:06 np0005623263 python3[7158]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 08:56:06 np0005623263 python3[7238]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 08:56:07 np0005623263 python3[7311]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1771422966.581557-273-125198552535490/source _original_basename=tmptopmu5oy follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:56:07 np0005623263 python3[7362]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-8d16-d2f5-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 08:56:08 np0005623263 python3[7390]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-8d16-d2f5-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Feb 18 08:56:09 np0005623263 python3[7418]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:56:26 np0005623263 python3[7444]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:56:32 np0005623263 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 18 08:57:02 np0005623263 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 18 08:57:02 np0005623263 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Feb 18 08:57:02 np0005623263 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Feb 18 08:57:02 np0005623263 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Feb 18 08:57:02 np0005623263 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Feb 18 08:57:02 np0005623263 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Feb 18 08:57:02 np0005623263 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Feb 18 08:57:02 np0005623263 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Feb 18 08:57:02 np0005623263 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Feb 18 08:57:02 np0005623263 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Feb 18 08:57:02 np0005623263 NetworkManager[868]: <info>  [1771423022.9531] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 18 08:57:02 np0005623263 systemd-udevd[7447]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 08:57:02 np0005623263 NetworkManager[868]: <info>  [1771423022.9736] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 18 08:57:02 np0005623263 NetworkManager[868]: <info>  [1771423022.9761] settings: (eth1): created default wired connection 'Wired connection 1'
Feb 18 08:57:02 np0005623263 NetworkManager[868]: <info>  [1771423022.9764] device (eth1): carrier: link connected
Feb 18 08:57:02 np0005623263 NetworkManager[868]: <info>  [1771423022.9765] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb 18 08:57:02 np0005623263 NetworkManager[868]: <info>  [1771423022.9770] policy: auto-activating connection 'Wired connection 1' (4601d78c-d28d-3097-8b13-cc6f0dbfbab6)
Feb 18 08:57:02 np0005623263 NetworkManager[868]: <info>  [1771423022.9774] device (eth1): Activation: starting connection 'Wired connection 1' (4601d78c-d28d-3097-8b13-cc6f0dbfbab6)
Feb 18 08:57:02 np0005623263 NetworkManager[868]: <info>  [1771423022.9775] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 18 08:57:02 np0005623263 NetworkManager[868]: <info>  [1771423022.9778] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 18 08:57:02 np0005623263 NetworkManager[868]: <info>  [1771423022.9782] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 18 08:57:02 np0005623263 NetworkManager[868]: <info>  [1771423022.9785] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 18 08:57:03 np0005623263 python3[7474]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-b17e-4411-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 08:57:10 np0005623263 python3[7554]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 08:57:10 np0005623263 python3[7627]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771423030.3104074-102-139395526252188/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=3f5fb7fbc57659c401a92ef3812579680279cbb5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:57:11 np0005623263 python3[7677]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 08:57:11 np0005623263 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 18 08:57:11 np0005623263 systemd[1]: Stopped Network Manager Wait Online.
Feb 18 08:57:11 np0005623263 systemd[1]: Stopping Network Manager Wait Online...
Feb 18 08:57:11 np0005623263 NetworkManager[868]: <info>  [1771423031.8700] caught SIGTERM, shutting down normally.
Feb 18 08:57:11 np0005623263 systemd[1]: Stopping Network Manager...
Feb 18 08:57:11 np0005623263 NetworkManager[868]: <info>  [1771423031.8708] dhcp4 (eth0): canceled DHCP transaction
Feb 18 08:57:11 np0005623263 NetworkManager[868]: <info>  [1771423031.8708] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 18 08:57:11 np0005623263 NetworkManager[868]: <info>  [1771423031.8708] dhcp4 (eth0): state changed no lease
Feb 18 08:57:11 np0005623263 NetworkManager[868]: <info>  [1771423031.8711] manager: NetworkManager state is now CONNECTING
Feb 18 08:57:11 np0005623263 NetworkManager[868]: <info>  [1771423031.8770] dhcp4 (eth1): canceled DHCP transaction
Feb 18 08:57:11 np0005623263 NetworkManager[868]: <info>  [1771423031.8770] dhcp4 (eth1): state changed no lease
Feb 18 08:57:11 np0005623263 NetworkManager[868]: <info>  [1771423031.8812] exiting (success)
Feb 18 08:57:11 np0005623263 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 18 08:57:11 np0005623263 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 18 08:57:11 np0005623263 systemd[1]: Stopped Network Manager.
Feb 18 08:57:11 np0005623263 systemd[1]: Starting Network Manager...
Feb 18 08:57:11 np0005623263 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 18 08:57:11 np0005623263 NetworkManager[7681]: <info>  [1771423031.9371] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:054a86f0-adcd-4801-9dec-c21d7e1147c9)
Feb 18 08:57:11 np0005623263 NetworkManager[7681]: <info>  [1771423031.9373] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 18 08:57:11 np0005623263 NetworkManager[7681]: <info>  [1771423031.9448] manager[0x55e5af60c000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 18 08:57:11 np0005623263 systemd[1]: Starting Hostname Service...
Feb 18 08:57:12 np0005623263 systemd[1]: Started Hostname Service.
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0187] hostname: hostname: using hostnamed
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0188] hostname: static hostname changed from (none) to "np0005623263.novalocal"
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0195] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0201] manager[0x55e5af60c000]: rfkill: Wi-Fi hardware radio set enabled
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0202] manager[0x55e5af60c000]: rfkill: WWAN hardware radio set enabled
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0242] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0243] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0243] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0244] manager: Networking is enabled by state file
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0247] settings: Loaded settings plugin: keyfile (internal)
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0252] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0294] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0306] dhcp: init: Using DHCP client 'internal'
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0310] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0317] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0325] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0337] device (lo): Activation: starting connection 'lo' (b53812ff-8ea4-495d-a77c-9332883d7f99)
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0346] device (eth0): carrier: link connected
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0353] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0363] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0364] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0376] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0391] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0401] device (eth1): carrier: link connected
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0408] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0417] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (4601d78c-d28d-3097-8b13-cc6f0dbfbab6) (indicated)
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0419] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0428] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0443] device (eth1): Activation: starting connection 'Wired connection 1' (4601d78c-d28d-3097-8b13-cc6f0dbfbab6)
Feb 18 08:57:12 np0005623263 systemd[1]: Started Network Manager.
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0453] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0462] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0467] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0470] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0474] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0481] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0486] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0493] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0504] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0516] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0521] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0533] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0537] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0558] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0565] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0574] device (lo): Activation: successful, device activated.
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0587] dhcp4 (eth0): state changed new lease, address=38.102.83.12
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0600] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 18 08:57:12 np0005623263 systemd[1]: Starting Network Manager Wait Online...
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0677] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0716] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0717] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0721] manager: NetworkManager state is now CONNECTED_SITE
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0724] device (eth0): Activation: successful, device activated.
Feb 18 08:57:12 np0005623263 NetworkManager[7681]: <info>  [1771423032.0731] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 18 08:57:12 np0005623263 python3[7761]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-b17e-4411-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 08:57:22 np0005623263 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 18 08:57:35 np0005623263 systemd[4802]: Starting Mark boot as successful...
Feb 18 08:57:35 np0005623263 systemd[4802]: Finished Mark boot as successful.
Feb 18 08:57:42 np0005623263 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.4901] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 18 08:57:57 np0005623263 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 18 08:57:57 np0005623263 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5195] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5199] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5206] device (eth1): Activation: successful, device activated.
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5213] manager: startup complete
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5214] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <warn>  [1771423077.5220] device (eth1): Activation: failed for connection 'Wired connection 1'
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5228] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Feb 18 08:57:57 np0005623263 systemd[1]: Finished Network Manager Wait Online.
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5303] dhcp4 (eth1): canceled DHCP transaction
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5303] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5303] dhcp4 (eth1): state changed no lease
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5313] policy: auto-activating connection 'ci-private-network' (dc274909-fe3e-5763-a5bf-c5286e2af0ce)
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5318] device (eth1): Activation: starting connection 'ci-private-network' (dc274909-fe3e-5763-a5bf-c5286e2af0ce)
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5318] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5320] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5329] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5335] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5367] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5368] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 18 08:57:57 np0005623263 NetworkManager[7681]: <info>  [1771423077.5374] device (eth1): Activation: successful, device activated.
Feb 18 08:58:07 np0005623263 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 18 08:58:11 np0005623263 python3[7868]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 08:58:11 np0005623263 python3[7941]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771423091.343602-259-110900419954406/source _original_basename=tmp05jiw3qj follow=False checksum=bef5c1e446d9e7f0c11306274a478da18eeae3c8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 08:59:12 np0005623263 systemd-logind[831]: Session 1 logged out. Waiting for processes to exit.
Feb 18 09:00:35 np0005623263 systemd[4802]: Created slice User Background Tasks Slice.
Feb 18 09:00:35 np0005623263 systemd[4802]: Starting Cleanup of User's Temporary Files and Directories...
Feb 18 09:00:35 np0005623263 systemd[4802]: Finished Cleanup of User's Temporary Files and Directories.
Feb 18 09:04:25 np0005623263 systemd-logind[831]: New session 3 of user zuul.
Feb 18 09:04:25 np0005623263 systemd[1]: Started Session 3 of User zuul.
Feb 18 09:04:25 np0005623263 python3[8024]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1e55-c45f-00000000217f-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:04:25 np0005623263 python3[8053]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:04:26 np0005623263 python3[8079]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:04:26 np0005623263 python3[8105]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:04:26 np0005623263 python3[8131]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:04:27 np0005623263 python3[8157]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:04:27 np0005623263 python3[8235]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 09:04:28 np0005623263 python3[8308]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771423467.4721217-514-188473543079068/source _original_basename=tmp2yz0lu44 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:04:28 np0005623263 python3[8358]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 18 09:04:28 np0005623263 systemd[1]: Reloading.
Feb 18 09:04:29 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:04:30 np0005623263 python3[8422]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Feb 18 09:04:31 np0005623263 python3[8448]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:04:31 np0005623263 python3[8476]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:04:31 np0005623263 python3[8504]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:04:31 np0005623263 python3[8532]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:04:32 np0005623263 python3[8559]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1e55-c45f-000000002186-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:04:32 np0005623263 python3[8589]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 18 09:04:34 np0005623263 systemd[1]: session-3.scope: Deactivated successfully.
Feb 18 09:04:34 np0005623263 systemd[1]: session-3.scope: Consumed 3.985s CPU time.
Feb 18 09:04:34 np0005623263 systemd-logind[831]: Session 3 logged out. Waiting for processes to exit.
Feb 18 09:04:34 np0005623263 systemd-logind[831]: Removed session 3.
Feb 18 09:04:36 np0005623263 systemd-logind[831]: New session 4 of user zuul.
Feb 18 09:04:36 np0005623263 systemd[1]: Started Session 4 of User zuul.
Feb 18 09:04:36 np0005623263 python3[8622]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 18 09:04:44 np0005623263 setsebool[8664]: The virt_use_nfs policy boolean was changed to 1 by root
Feb 18 09:04:44 np0005623263 setsebool[8664]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Feb 18 09:04:55 np0005623263 kernel: SELinux:  Converting 386 SID table entries...
Feb 18 09:04:55 np0005623263 kernel: SELinux:  policy capability network_peer_controls=1
Feb 18 09:04:55 np0005623263 kernel: SELinux:  policy capability open_perms=1
Feb 18 09:04:55 np0005623263 kernel: SELinux:  policy capability extended_socket_class=1
Feb 18 09:04:55 np0005623263 kernel: SELinux:  policy capability always_check_network=0
Feb 18 09:04:55 np0005623263 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 18 09:04:55 np0005623263 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 18 09:04:55 np0005623263 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 18 09:04:59 np0005623263 chronyd[839]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Feb 18 09:05:05 np0005623263 kernel: SELinux:  Converting 389 SID table entries...
Feb 18 09:05:05 np0005623263 kernel: SELinux:  policy capability network_peer_controls=1
Feb 18 09:05:05 np0005623263 kernel: SELinux:  policy capability open_perms=1
Feb 18 09:05:05 np0005623263 kernel: SELinux:  policy capability extended_socket_class=1
Feb 18 09:05:05 np0005623263 kernel: SELinux:  policy capability always_check_network=0
Feb 18 09:05:05 np0005623263 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 18 09:05:05 np0005623263 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 18 09:05:05 np0005623263 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 18 09:05:22 np0005623263 dbus-broker-launch[822]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb 18 09:05:23 np0005623263 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 18 09:05:23 np0005623263 systemd[1]: Starting man-db-cache-update.service...
Feb 18 09:05:23 np0005623263 systemd[1]: Reloading.
Feb 18 09:05:23 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:05:23 np0005623263 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 18 09:05:25 np0005623263 python3[11270]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-f69b-88f9-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:05:26 np0005623263 kernel: evm: overlay not supported
Feb 18 09:05:26 np0005623263 systemd[4802]: Starting D-Bus User Message Bus...
Feb 18 09:05:26 np0005623263 dbus-broker-launch[12620]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Feb 18 09:05:26 np0005623263 dbus-broker-launch[12620]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Feb 18 09:05:26 np0005623263 systemd[4802]: Started D-Bus User Message Bus.
Feb 18 09:05:26 np0005623263 dbus-broker-lau[12620]: Ready
Feb 18 09:05:26 np0005623263 systemd[4802]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb 18 09:05:26 np0005623263 systemd[4802]: Created slice Slice /user.
Feb 18 09:05:26 np0005623263 systemd[4802]: podman-12493.scope: unit configures an IP firewall, but not running as root.
Feb 18 09:05:26 np0005623263 systemd[4802]: (This warning is only shown for the first unit using IP firewalling.)
Feb 18 09:05:26 np0005623263 systemd[4802]: Started podman-12493.scope.
Feb 18 09:05:26 np0005623263 systemd[4802]: Started podman-pause-4a65e834.scope.
Feb 18 09:05:26 np0005623263 python3[13200]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.147:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.147:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:05:26 np0005623263 python3[13200]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Feb 18 09:05:27 np0005623263 systemd[1]: session-4.scope: Deactivated successfully.
Feb 18 09:05:27 np0005623263 systemd[1]: session-4.scope: Consumed 41.853s CPU time.
Feb 18 09:05:27 np0005623263 systemd-logind[831]: Session 4 logged out. Waiting for processes to exit.
Feb 18 09:05:27 np0005623263 systemd-logind[831]: Removed session 4.
Feb 18 09:05:50 np0005623263 systemd-logind[831]: New session 5 of user zuul.
Feb 18 09:05:50 np0005623263 systemd[1]: Started Session 5 of User zuul.
Feb 18 09:05:50 np0005623263 python3[25294]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCu31L0d89OThB7WNk9rLxsZTEzx6S8+lYNzyL1EfUy2sylFc2Dntsx4AVUVrGXx4Aosx1d8sGRx5ch/KQKFTBA= zuul@np0005623262.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 09:05:50 np0005623263 python3[25553]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCu31L0d89OThB7WNk9rLxsZTEzx6S8+lYNzyL1EfUy2sylFc2Dntsx4AVUVrGXx4Aosx1d8sGRx5ch/KQKFTBA= zuul@np0005623262.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 09:05:51 np0005623263 python3[26061]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005623263.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb 18 09:05:51 np0005623263 python3[26332]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCu31L0d89OThB7WNk9rLxsZTEzx6S8+lYNzyL1EfUy2sylFc2Dntsx4AVUVrGXx4Aosx1d8sGRx5ch/KQKFTBA= zuul@np0005623262.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 18 09:05:52 np0005623263 python3[26586]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 09:05:52 np0005623263 python3[26903]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771423552.0420249-135-251344342854928/source _original_basename=tmpwk_lnjcd follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:05:53 np0005623263 python3[27269]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Feb 18 09:05:53 np0005623263 systemd[1]: Starting Hostname Service...
Feb 18 09:05:53 np0005623263 systemd[1]: Started Hostname Service.
Feb 18 09:05:53 np0005623263 systemd-hostnamed[27422]: Changed pretty hostname to 'compute-0'
Feb 18 09:05:53 np0005623263 systemd-hostnamed[27422]: Hostname set to <compute-0> (static)
Feb 18 09:05:53 np0005623263 NetworkManager[7681]: <info>  [1771423553.7058] hostname: static hostname changed from "np0005623263.novalocal" to "compute-0"
Feb 18 09:05:53 np0005623263 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 18 09:05:53 np0005623263 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 18 09:05:53 np0005623263 systemd[1]: session-5.scope: Deactivated successfully.
Feb 18 09:05:53 np0005623263 systemd[1]: session-5.scope: Consumed 2.085s CPU time.
Feb 18 09:05:53 np0005623263 systemd-logind[831]: Session 5 logged out. Waiting for processes to exit.
Feb 18 09:05:53 np0005623263 systemd-logind[831]: Removed session 5.
Feb 18 09:06:00 np0005623263 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 18 09:06:00 np0005623263 systemd[1]: Finished man-db-cache-update.service.
Feb 18 09:06:00 np0005623263 systemd[1]: man-db-cache-update.service: Consumed 42.099s CPU time.
Feb 18 09:06:00 np0005623263 systemd[1]: run-ra10881fc75294dd09d611302218f91bd.service: Deactivated successfully.
Feb 18 09:06:03 np0005623263 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 18 09:06:23 np0005623263 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 18 09:10:24 np0005623263 systemd[1]: Starting Cleanup of Temporary Directories...
Feb 18 09:10:24 np0005623263 systemd-logind[831]: New session 6 of user zuul.
Feb 18 09:10:24 np0005623263 systemd[1]: Started Session 6 of User zuul.
Feb 18 09:10:24 np0005623263 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb 18 09:10:24 np0005623263 systemd[1]: Finished Cleanup of Temporary Directories.
Feb 18 09:10:24 np0005623263 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb 18 09:10:24 np0005623263 python3[30736]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:10:25 np0005623263 python3[30852]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 09:10:26 np0005623263 python3[30925]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771423825.5898707-34394-24805019459794/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:10:26 np0005623263 python3[30951]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 09:10:26 np0005623263 python3[31024]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771423825.5898707-34394-24805019459794/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:10:27 np0005623263 python3[31050]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 09:10:27 np0005623263 python3[31123]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771423825.5898707-34394-24805019459794/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:10:27 np0005623263 python3[31149]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 09:10:28 np0005623263 python3[31222]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771423825.5898707-34394-24805019459794/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:10:28 np0005623263 python3[31248]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 09:10:28 np0005623263 python3[31321]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771423825.5898707-34394-24805019459794/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:10:28 np0005623263 python3[31347]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 09:10:29 np0005623263 python3[31420]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771423825.5898707-34394-24805019459794/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:10:29 np0005623263 python3[31446]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 18 09:10:29 np0005623263 python3[31519]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1771423825.5898707-34394-24805019459794/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:13:07 np0005623263 python3[31596]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:18:07 np0005623263 systemd[1]: session-6.scope: Deactivated successfully.
Feb 18 09:18:07 np0005623263 systemd[1]: session-6.scope: Consumed 4.621s CPU time.
Feb 18 09:18:07 np0005623263 systemd-logind[831]: Session 6 logged out. Waiting for processes to exit.
Feb 18 09:18:07 np0005623263 systemd-logind[831]: Removed session 6.
Feb 18 09:26:51 np0005623263 systemd-logind[831]: New session 7 of user zuul.
Feb 18 09:26:51 np0005623263 systemd[1]: Started Session 7 of User zuul.
Feb 18 09:26:52 np0005623263 python3.9[31802]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:26:54 np0005623263 python3.9[31984]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:27:02 np0005623263 systemd[1]: session-7.scope: Deactivated successfully.
Feb 18 09:27:02 np0005623263 systemd[1]: session-7.scope: Consumed 7.220s CPU time.
Feb 18 09:27:02 np0005623263 systemd-logind[831]: Session 7 logged out. Waiting for processes to exit.
Feb 18 09:27:02 np0005623263 systemd-logind[831]: Removed session 7.
Feb 18 09:27:09 np0005623263 systemd-logind[831]: New session 8 of user zuul.
Feb 18 09:27:09 np0005623263 systemd[1]: Started Session 8 of User zuul.
Feb 18 09:27:10 np0005623263 python3.9[32198]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:27:11 np0005623263 systemd[1]: session-8.scope: Deactivated successfully.
Feb 18 09:27:11 np0005623263 systemd-logind[831]: Session 8 logged out. Waiting for processes to exit.
Feb 18 09:27:11 np0005623263 systemd-logind[831]: Removed session 8.
Feb 18 09:27:26 np0005623263 systemd-logind[831]: New session 9 of user zuul.
Feb 18 09:27:26 np0005623263 systemd[1]: Started Session 9 of User zuul.
Feb 18 09:27:27 np0005623263 python3.9[32379]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 18 09:27:28 np0005623263 python3.9[32553]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:27:29 np0005623263 python3.9[32706]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:27:30 np0005623263 python3.9[32860]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:27:30 np0005623263 python3.9[33015]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:27:31 np0005623263 python3.9[33170]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:27:32 np0005623263 python3.9[33294]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1771424851.1306212-68-209579651651689/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:27:32 np0005623263 python3.9[33447]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:27:33 np0005623263 python3.9[33604]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:27:33 np0005623263 python3.9[33757]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:27:34 np0005623263 python3.9[33907]: ansible-ansible.builtin.service_facts Invoked
Feb 18 09:27:37 np0005623263 python3.9[34161]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:27:37 np0005623263 python3.9[34311]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:27:38 np0005623263 python3.9[34465]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:27:39 np0005623263 python3.9[34624]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 18 09:27:40 np0005623263 python3.9[34709]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:28:33 np0005623263 systemd[1]: Reloading.
Feb 18 09:28:33 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:28:33 np0005623263 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Feb 18 09:28:34 np0005623263 systemd[1]: Reloading.
Feb 18 09:28:34 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:28:34 np0005623263 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb 18 09:28:34 np0005623263 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb 18 09:28:34 np0005623263 systemd[1]: Reloading.
Feb 18 09:28:34 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:28:34 np0005623263 systemd[1]: Listening on LVM2 poll daemon socket.
Feb 18 09:28:34 np0005623263 dbus-broker-launch[819]: Noticed file-system modification, trigger reload.
Feb 18 09:28:34 np0005623263 dbus-broker-launch[819]: Noticed file-system modification, trigger reload.
Feb 18 09:29:33 np0005623263 kernel: SELinux:  Converting 2727 SID table entries...
Feb 18 09:29:33 np0005623263 kernel: SELinux:  policy capability network_peer_controls=1
Feb 18 09:29:33 np0005623263 kernel: SELinux:  policy capability open_perms=1
Feb 18 09:29:33 np0005623263 kernel: SELinux:  policy capability extended_socket_class=1
Feb 18 09:29:33 np0005623263 kernel: SELinux:  policy capability always_check_network=0
Feb 18 09:29:33 np0005623263 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 18 09:29:33 np0005623263 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 18 09:29:33 np0005623263 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 18 09:29:33 np0005623263 dbus-broker-launch[822]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Feb 18 09:29:33 np0005623263 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 18 09:29:33 np0005623263 systemd[1]: Starting man-db-cache-update.service...
Feb 18 09:29:33 np0005623263 systemd[1]: Reloading.
Feb 18 09:29:33 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:29:33 np0005623263 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 18 09:29:34 np0005623263 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 18 09:29:34 np0005623263 systemd[1]: Finished man-db-cache-update.service.
Feb 18 09:29:34 np0005623263 systemd[1]: run-r5502e4fd0f0c4b68bba714a46b1c8798.service: Deactivated successfully.
Feb 18 09:29:34 np0005623263 python3.9[36269]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:29:36 np0005623263 python3.9[36551]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb 18 09:29:37 np0005623263 python3.9[36704]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb 18 09:29:40 np0005623263 python3.9[36859]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:29:41 np0005623263 python3.9[37012]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb 18 09:29:42 np0005623263 python3.9[37165]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:29:43 np0005623263 python3.9[37318]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:29:43 np0005623263 python3.9[37442]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771424982.898687-231-11510582403349/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a862c2e87f657cef02affa33f665475f81aa6bb9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:29:44 np0005623263 python3.9[37595]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:29:45 np0005623263 python3.9[37748]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:29:48 np0005623263 python3.9[37902]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:29:49 np0005623263 python3.9[38056]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb 18 09:29:49 np0005623263 rsyslogd[1015]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 18 09:29:49 np0005623263 rsyslogd[1015]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 18 09:29:49 np0005623263 python3.9[38211]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 18 09:29:50 np0005623263 python3.9[38370]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 18 09:29:51 np0005623263 python3.9[38531]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb 18 09:29:51 np0005623263 python3.9[38685]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 18 09:29:52 np0005623263 python3.9[38844]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb 18 09:29:53 np0005623263 python3.9[38997]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:29:56 np0005623263 python3.9[39151]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:29:57 np0005623263 python3.9[39304]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:29:57 np0005623263 python3.9[39428]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771424996.7027094-350-225937087723824/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:29:58 np0005623263 python3.9[39581]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:29:58 np0005623263 systemd[1]: Starting Load Kernel Modules...
Feb 18 09:29:58 np0005623263 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 18 09:29:58 np0005623263 kernel: Bridge firewalling registered
Feb 18 09:29:58 np0005623263 systemd-modules-load[39585]: Inserted module 'br_netfilter'
Feb 18 09:29:58 np0005623263 systemd[1]: Finished Load Kernel Modules.
Feb 18 09:29:59 np0005623263 python3.9[39741]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:29:59 np0005623263 python3.9[39865]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771424998.8894002-373-248491088164133/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:30:00 np0005623263 python3.9[40018]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:30:05 np0005623263 dbus-broker-launch[819]: Noticed file-system modification, trigger reload.
Feb 18 09:30:05 np0005623263 dbus-broker-launch[819]: Noticed file-system modification, trigger reload.
Feb 18 09:30:05 np0005623263 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 18 09:30:05 np0005623263 systemd[1]: Starting man-db-cache-update.service...
Feb 18 09:30:05 np0005623263 systemd[1]: Reloading.
Feb 18 09:30:05 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:30:05 np0005623263 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 18 09:30:07 np0005623263 python3.9[41948]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:30:07 np0005623263 python3.9[43297]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb 18 09:30:08 np0005623263 python3.9[44118]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:30:08 np0005623263 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 18 09:30:08 np0005623263 systemd[1]: Finished man-db-cache-update.service.
Feb 18 09:30:08 np0005623263 systemd[1]: man-db-cache-update.service: Consumed 3.384s CPU time.
Feb 18 09:30:08 np0005623263 systemd[1]: run-r8b24592e8b5245c598e16777aa8aef36.service: Deactivated successfully.
Feb 18 09:30:09 np0005623263 python3.9[44272]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:30:09 np0005623263 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 18 09:30:09 np0005623263 systemd[1]: Starting Authorization Manager...
Feb 18 09:30:09 np0005623263 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 18 09:30:09 np0005623263 polkitd[44489]: Started polkitd version 0.117
Feb 18 09:30:09 np0005623263 systemd[1]: Started Authorization Manager.
Feb 18 09:30:10 np0005623263 python3.9[44660]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:30:10 np0005623263 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 18 09:30:10 np0005623263 systemd[1]: tuned.service: Deactivated successfully.
Feb 18 09:30:10 np0005623263 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 18 09:30:10 np0005623263 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 18 09:30:10 np0005623263 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 18 09:30:11 np0005623263 python3.9[44822]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb 18 09:30:13 np0005623263 python3.9[44975]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:30:13 np0005623263 systemd[1]: Reloading.
Feb 18 09:30:13 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:30:13 np0005623263 systemd[1]: Starting dnf makecache...
Feb 18 09:30:13 np0005623263 dnf[45020]: Failed determining last makecache time.
Feb 18 09:30:13 np0005623263 dnf[45020]: delorean-openstack-barbican-42b4c41831408a8e323 114 kB/s | 3.0 kB     00:00
Feb 18 09:30:13 np0005623263 dnf[45020]: delorean-python-glean-642fffe0203a8ffcc2443db52 176 kB/s | 3.0 kB     00:00
Feb 18 09:30:13 np0005623263 dnf[45020]: delorean-openstack-cinder-1c00d6490d88e436f26ef 194 kB/s | 3.0 kB     00:00
Feb 18 09:30:13 np0005623263 dnf[45020]: delorean-python-stevedore-c4acc5639fd2329372142 169 kB/s | 3.0 kB     00:00
Feb 18 09:30:13 np0005623263 dnf[45020]: delorean-python-cloudkitty-tests-tempest-783703 159 kB/s | 3.0 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: delorean-diskimage-builder-61b717cc45660834fe9a 148 kB/s | 3.0 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: delorean-openstack-nova-eaa65f0b85123a4ee343246 183 kB/s | 3.0 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: delorean-python-designate-tests-tempest-347fdbc 173 kB/s | 3.0 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: delorean-openstack-glance-1fd12c29b339f30fe823e 198 kB/s | 3.0 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 191 kB/s | 3.0 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: delorean-openstack-manila-d783d10e75495b73866db 198 kB/s | 3.0 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: delorean-openstack-neutron-95cadbd379667c8520c8 187 kB/s | 3.0 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: delorean-openstack-octavia-5975097dd4b021385178 181 kB/s | 3.0 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: delorean-openstack-watcher-c014f81a8647287f6dcc 189 kB/s | 3.0 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: delorean-python-tcib-78032d201b02cee27e8e644c61 194 kB/s | 3.0 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 183 kB/s | 3.0 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: delorean-openstack-swift-dc98a8463506ac520c469a 191 kB/s | 3.0 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: delorean-python-tempestconf-8515371b7cceebd4282 200 kB/s | 3.0 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: delorean-openstack-heat-ui-013accbfd179753bc3f0 182 kB/s | 3.0 kB     00:00
Feb 18 09:30:14 np0005623263 python3.9[45179]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:30:14 np0005623263 dnf[45020]: CentOS Stream 9 - BaseOS                         72 kB/s | 7.0 kB     00:00
Feb 18 09:30:14 np0005623263 systemd[1]: Reloading.
Feb 18 09:30:14 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:30:14 np0005623263 dnf[45020]: CentOS Stream 9 - AppStream                      71 kB/s | 7.1 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: CentOS Stream 9 - CRB                            70 kB/s | 6.9 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: CentOS Stream 9 - Extras packages                74 kB/s | 7.6 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: dlrn-antelope-testing                           186 kB/s | 3.0 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: dlrn-antelope-build-deps                        205 kB/s | 3.0 kB     00:00
Feb 18 09:30:14 np0005623263 dnf[45020]: centos9-rabbitmq                                122 kB/s | 3.0 kB     00:00
Feb 18 09:30:15 np0005623263 dnf[45020]: centos9-storage                                 113 kB/s | 3.0 kB     00:00
Feb 18 09:30:15 np0005623263 dnf[45020]: centos9-opstools                                142 kB/s | 3.0 kB     00:00
Feb 18 09:30:15 np0005623263 dnf[45020]: NFV SIG OpenvSwitch                             139 kB/s | 3.0 kB     00:00
Feb 18 09:30:15 np0005623263 dnf[45020]: repo-setup-centos-appstream                     160 kB/s | 4.4 kB     00:00
Feb 18 09:30:15 np0005623263 python3.9[45395]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:30:15 np0005623263 dnf[45020]: repo-setup-centos-baseos                        143 kB/s | 3.9 kB     00:00
Feb 18 09:30:15 np0005623263 dnf[45020]: repo-setup-centos-highavailability              126 kB/s | 3.9 kB     00:00
Feb 18 09:30:15 np0005623263 dnf[45020]: repo-setup-centos-powertools                    186 kB/s | 4.3 kB     00:00
Feb 18 09:30:15 np0005623263 dnf[45020]: Extra Packages for Enterprise Linux 9 - x86_64  210 kB/s |  27 kB     00:00
Feb 18 09:30:15 np0005623263 python3.9[45566]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:30:15 np0005623263 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Feb 18 09:30:16 np0005623263 python3.9[45720]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:30:16 np0005623263 dnf[45020]: Metadata cache created.
Feb 18 09:30:16 np0005623263 systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 18 09:30:16 np0005623263 systemd[1]: Finished dnf makecache.
Feb 18 09:30:16 np0005623263 systemd[1]: dnf-makecache.service: Consumed 2.085s CPU time.
Feb 18 09:30:18 np0005623263 python3.9[45883]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:30:18 np0005623263 python3.9[46037]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:30:18 np0005623263 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 18 09:30:18 np0005623263 systemd[1]: Stopped Apply Kernel Variables.
Feb 18 09:30:18 np0005623263 systemd[1]: Stopping Apply Kernel Variables...
Feb 18 09:30:18 np0005623263 systemd[1]: Starting Apply Kernel Variables...
Feb 18 09:30:18 np0005623263 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 18 09:30:18 np0005623263 systemd[1]: Finished Apply Kernel Variables.
Feb 18 09:30:19 np0005623263 systemd[1]: session-9.scope: Deactivated successfully.
Feb 18 09:30:19 np0005623263 systemd[1]: session-9.scope: Consumed 2min 16.634s CPU time.
Feb 18 09:30:19 np0005623263 systemd-logind[831]: Session 9 logged out. Waiting for processes to exit.
Feb 18 09:30:19 np0005623263 systemd-logind[831]: Removed session 9.
Feb 18 09:30:25 np0005623263 systemd-logind[831]: New session 10 of user zuul.
Feb 18 09:30:25 np0005623263 systemd[1]: Started Session 10 of User zuul.
Feb 18 09:30:26 np0005623263 python3.9[46220]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:30:27 np0005623263 python3.9[46374]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:30:28 np0005623263 python3.9[46533]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:30:29 np0005623263 python3.9[46684]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:30:30 np0005623263 python3.9[46841]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 18 09:30:31 np0005623263 python3.9[46926]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:30:33 np0005623263 python3.9[47080]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 18 09:30:34 np0005623263 python3.9[47252]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:30:35 np0005623263 python3.9[47405]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:30:35 np0005623263 systemd[1]: var-lib-containers-storage-overlay-compat2528700394-merged.mount: Deactivated successfully.
Feb 18 09:30:35 np0005623263 podman[47406]: 2026-02-18 14:30:35.204305954 +0000 UTC m=+0.070517042 system refresh
Feb 18 09:30:35 np0005623263 python3.9[47569]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:30:36 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:30:36 np0005623263 python3.9[47693]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425035.3605545-104-10217554877576/.source.json follow=False _original_basename=podman_network_config.j2 checksum=6cadb026ac4954f74618da557c5c1642182ef73b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:30:37 np0005623263 python3.9[47846]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:30:37 np0005623263 python3.9[47970]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771425036.6273465-119-33075881587720/.source.conf follow=False _original_basename=registries.conf.j2 checksum=3be7a60f934c092075c2da93762d4d72f2e4c224 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:30:38 np0005623263 python3.9[48123]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:30:38 np0005623263 python3.9[48276]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:30:39 np0005623263 python3.9[48429]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:30:39 np0005623263 python3.9[48582]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:30:40 np0005623263 python3.9[48732]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:30:41 np0005623263 python3.9[48887]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 18 09:30:43 np0005623263 python3.9[49041]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 18 09:30:45 np0005623263 python3.9[49203]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 18 09:30:47 np0005623263 python3.9[49357]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 18 09:30:50 np0005623263 python3.9[49513]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 18 09:30:53 np0005623263 python3.9[49670]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 18 09:30:56 np0005623263 python3.9[49840]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 18 09:30:58 np0005623263 python3.9[49994]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 18 09:31:12 np0005623263 python3.9[50330]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 18 09:31:14 np0005623263 python3.9[50487]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['device-mapper-multipath'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 18 09:31:17 np0005623263 python3.9[50645]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:31:17 np0005623263 python3.9[50821]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:31:18 np0005623263 python3.9[50945]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1771425077.3268032-277-6704594073154/.source.json _original_basename=.wtxx_uto follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:31:19 np0005623263 python3.9[51098]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 18 09:31:19 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:31:21 np0005623263 systemd[1]: var-lib-containers-storage-overlay-compat2674859091-lower\x2dmapped.mount: Deactivated successfully.
Feb 18 09:31:26 np0005623263 podman[51111]: 2026-02-18 14:31:26.947552864 +0000 UTC m=+7.520052817 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 18 09:31:26 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:31:26 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:31:27 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:31:27 np0005623263 python3.9[51410]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 18 09:31:40 np0005623263 podman[51423]: 2026-02-18 14:31:40.354959596 +0000 UTC m=+12.585201364 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 18 09:31:40 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:31:40 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:31:40 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:31:41 np0005623263 python3.9[51722]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 18 09:31:58 np0005623263 podman[51734]: 2026-02-18 14:31:58.560869221 +0000 UTC m=+17.279175813 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 18 09:31:58 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:31:58 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:31:58 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:31:59 np0005623263 python3.9[52004]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 18 09:32:19 np0005623263 podman[52016]: 2026-02-18 14:32:19.96688933 +0000 UTC m=+20.458185071 image pull 9fa363bd42d663260cc49627681950f89a91ed46f9f0b14cd192f3e44463c471 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Feb 18 09:32:19 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:32:20 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:32:20 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:32:20 np0005623263 python3.9[52347]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 18 09:32:22 np0005623263 podman[52360]: 2026-02-18 14:32:22.235397727 +0000 UTC m=+1.460357435 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Feb 18 09:32:22 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:32:22 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:32:22 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:32:23 np0005623263 python3.9[52639]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 18 09:32:26 np0005623263 podman[52651]: 2026-02-18 14:32:26.614246495 +0000 UTC m=+3.534855896 image pull 5a0c248a731dc2e1754b1906fede374f0f92203547e5b10eb435ef1a64b36296 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Feb 18 09:32:26 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:32:26 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:32:26 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:32:27 np0005623263 python3.9[52908]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 18 09:32:27 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:32:37 np0005623263 podman[52920]: 2026-02-18 14:32:37.203336556 +0000 UTC m=+9.530541361 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Feb 18 09:32:37 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:32:37 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:32:37 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:32:37 np0005623263 systemd[1]: session-10.scope: Deactivated successfully.
Feb 18 09:32:37 np0005623263 systemd[1]: session-10.scope: Consumed 2min 48.437s CPU time.
Feb 18 09:32:37 np0005623263 systemd-logind[831]: Session 10 logged out. Waiting for processes to exit.
Feb 18 09:32:37 np0005623263 systemd-logind[831]: Removed session 10.
Feb 18 09:32:42 np0005623263 systemd-logind[831]: New session 11 of user zuul.
Feb 18 09:32:42 np0005623263 systemd[1]: Started Session 11 of User zuul.
Feb 18 09:32:43 np0005623263 python3.9[53358]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:32:44 np0005623263 python3.9[53515]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb 18 09:32:45 np0005623263 python3.9[53669]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 18 09:32:46 np0005623263 python3.9[53828]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 18 09:32:47 np0005623263 python3.9[53989]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 18 09:32:48 np0005623263 python3.9[54074]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 18 09:32:50 np0005623263 python3.9[54237]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:33:07 np0005623263 kernel: SELinux:  Converting 2741 SID table entries...
Feb 18 09:33:08 np0005623263 kernel: SELinux:  policy capability network_peer_controls=1
Feb 18 09:33:08 np0005623263 kernel: SELinux:  policy capability open_perms=1
Feb 18 09:33:08 np0005623263 kernel: SELinux:  policy capability extended_socket_class=1
Feb 18 09:33:08 np0005623263 kernel: SELinux:  policy capability always_check_network=0
Feb 18 09:33:08 np0005623263 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 18 09:33:08 np0005623263 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 18 09:33:08 np0005623263 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 18 09:33:08 np0005623263 dbus-broker-launch[822]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Feb 18 09:33:08 np0005623263 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb 18 09:33:10 np0005623263 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 18 09:33:10 np0005623263 systemd[1]: Starting man-db-cache-update.service...
Feb 18 09:33:10 np0005623263 systemd[1]: Reloading.
Feb 18 09:33:10 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:33:10 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:33:10 np0005623263 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 18 09:33:10 np0005623263 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 18 09:33:10 np0005623263 systemd[1]: Finished man-db-cache-update.service.
Feb 18 09:33:10 np0005623263 systemd[1]: run-rcd7d70ef8ff34c56b8242498dfcc7a77.service: Deactivated successfully.
Feb 18 09:33:11 np0005623263 python3.9[55362]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 18 09:33:11 np0005623263 systemd[1]: Reloading.
Feb 18 09:33:12 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:33:12 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:33:12 np0005623263 systemd[1]: Starting Open vSwitch Database Unit...
Feb 18 09:33:12 np0005623263 chown[55410]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb 18 09:33:12 np0005623263 ovs-ctl[55415]: /etc/openvswitch/conf.db does not exist ... (warning).
Feb 18 09:33:12 np0005623263 ovs-ctl[55415]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Feb 18 09:33:12 np0005623263 ovs-ctl[55415]: Starting ovsdb-server [  OK  ]
Feb 18 09:33:12 np0005623263 ovs-vsctl[55464]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Feb 18 09:33:12 np0005623263 ovs-vsctl[55484]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"bff0df27-aa33-4d98-b417-cc9248f7a486\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Feb 18 09:33:12 np0005623263 ovs-ctl[55415]: Configuring Open vSwitch system IDs [  OK  ]
Feb 18 09:33:12 np0005623263 ovs-ctl[55415]: Enabling remote OVSDB managers [  OK  ]
Feb 18 09:33:12 np0005623263 systemd[1]: Started Open vSwitch Database Unit.
Feb 18 09:33:12 np0005623263 ovs-vsctl[55490]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb 18 09:33:12 np0005623263 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Feb 18 09:33:12 np0005623263 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Feb 18 09:33:12 np0005623263 systemd[1]: Starting Open vSwitch Forwarding Unit...
Feb 18 09:33:12 np0005623263 kernel: openvswitch: Open vSwitch switching datapath
Feb 18 09:33:12 np0005623263 ovs-ctl[55534]: Inserting openvswitch module [  OK  ]
Feb 18 09:33:12 np0005623263 ovs-ctl[55503]: Starting ovs-vswitchd [  OK  ]
Feb 18 09:33:12 np0005623263 ovs-vsctl[55551]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb 18 09:33:12 np0005623263 ovs-ctl[55503]: Enabling remote OVSDB managers [  OK  ]
Feb 18 09:33:12 np0005623263 systemd[1]: Started Open vSwitch Forwarding Unit.
Feb 18 09:33:12 np0005623263 systemd[1]: Starting Open vSwitch...
Feb 18 09:33:12 np0005623263 systemd[1]: Finished Open vSwitch.
Feb 18 09:33:13 np0005623263 python3.9[55703]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:33:14 np0005623263 python3.9[55856]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb 18 09:33:15 np0005623263 kernel: SELinux:  Converting 2755 SID table entries...
Feb 18 09:33:15 np0005623263 kernel: SELinux:  policy capability network_peer_controls=1
Feb 18 09:33:15 np0005623263 kernel: SELinux:  policy capability open_perms=1
Feb 18 09:33:15 np0005623263 kernel: SELinux:  policy capability extended_socket_class=1
Feb 18 09:33:15 np0005623263 kernel: SELinux:  policy capability always_check_network=0
Feb 18 09:33:15 np0005623263 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 18 09:33:15 np0005623263 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 18 09:33:15 np0005623263 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 18 09:33:16 np0005623263 python3.9[56011]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:33:17 np0005623263 dbus-broker-launch[822]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Feb 18 09:33:17 np0005623263 python3.9[56170]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:33:19 np0005623263 python3.9[56324]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:33:20 np0005623263 python3.9[56612]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb 18 09:33:21 np0005623263 python3.9[56762]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:33:22 np0005623263 python3.9[56917]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:33:24 np0005623263 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 18 09:33:24 np0005623263 systemd[1]: Starting man-db-cache-update.service...
Feb 18 09:33:24 np0005623263 systemd[1]: Reloading.
Feb 18 09:33:24 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:33:24 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:33:24 np0005623263 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 18 09:33:26 np0005623263 python3.9[57240]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:33:26 np0005623263 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 18 09:33:26 np0005623263 systemd[1]: Stopped Network Manager Wait Online.
Feb 18 09:33:26 np0005623263 systemd[1]: Stopping Network Manager Wait Online...
Feb 18 09:33:26 np0005623263 systemd[1]: Stopping Network Manager...
Feb 18 09:33:26 np0005623263 NetworkManager[7681]: <info>  [1771425206.0476] caught SIGTERM, shutting down normally.
Feb 18 09:33:26 np0005623263 NetworkManager[7681]: <info>  [1771425206.0485] dhcp4 (eth0): canceled DHCP transaction
Feb 18 09:33:26 np0005623263 NetworkManager[7681]: <info>  [1771425206.0485] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 18 09:33:26 np0005623263 NetworkManager[7681]: <info>  [1771425206.0485] dhcp4 (eth0): state changed no lease
Feb 18 09:33:26 np0005623263 NetworkManager[7681]: <info>  [1771425206.0487] manager: NetworkManager state is now CONNECTED_SITE
Feb 18 09:33:26 np0005623263 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 18 09:33:26 np0005623263 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 18 09:33:26 np0005623263 NetworkManager[7681]: <info>  [1771425206.1721] exiting (success)
Feb 18 09:33:26 np0005623263 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 18 09:33:26 np0005623263 systemd[1]: Stopped Network Manager.
Feb 18 09:33:26 np0005623263 systemd[1]: NetworkManager.service: Consumed 12.748s CPU time, 4.1M memory peak, read 0B from disk, written 19.0K to disk.
Feb 18 09:33:26 np0005623263 systemd[1]: Starting Network Manager...
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.2264] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:054a86f0-adcd-4801-9dec-c21d7e1147c9)
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.2265] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.2316] manager[0x56274adc9000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 18 09:33:26 np0005623263 systemd[1]: Starting Hostname Service...
Feb 18 09:33:26 np0005623263 systemd[1]: Started Hostname Service.
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.2935] hostname: hostname: using hostnamed
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.2935] hostname: static hostname changed from (none) to "compute-0"
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.2941] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.2947] manager[0x56274adc9000]: rfkill: Wi-Fi hardware radio set enabled
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.2948] manager[0x56274adc9000]: rfkill: WWAN hardware radio set enabled
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.2983] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.2997] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.2998] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.2998] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.2999] manager: Networking is enabled by state file
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3004] settings: Loaded settings plugin: keyfile (internal)
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3009] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3038] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3051] dhcp: init: Using DHCP client 'internal'
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3055] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3062] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3069] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3080] device (lo): Activation: starting connection 'lo' (b53812ff-8ea4-495d-a77c-9332883d7f99)
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3087] device (eth0): carrier: link connected
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3091] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3096] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3098] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3104] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3112] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3117] device (eth1): carrier: link connected
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3122] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3128] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (dc274909-fe3e-5763-a5bf-c5286e2af0ce) (indicated)
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3129] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3135] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3142] device (eth1): Activation: starting connection 'ci-private-network' (dc274909-fe3e-5763-a5bf-c5286e2af0ce)
Feb 18 09:33:26 np0005623263 systemd[1]: Started Network Manager.
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3148] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3160] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3163] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3168] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3172] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3178] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3182] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3187] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3211] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3220] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3223] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3230] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3243] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3255] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3258] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3263] device (lo): Activation: successful, device activated.
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3272] dhcp4 (eth0): state changed new lease, address=38.102.83.12
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.3281] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 18 09:33:26 np0005623263 systemd[1]: Starting Network Manager Wait Online...
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.5737] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.5751] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.5756] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.5759] manager: NetworkManager state is now CONNECTED_LOCAL
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.5763] device (eth1): Activation: successful, device activated.
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.5777] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.5801] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.5807] manager: NetworkManager state is now CONNECTED_SITE
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.5811] device (eth0): Activation: successful, device activated.
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.5817] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 18 09:33:26 np0005623263 NetworkManager[57258]: <info>  [1771425206.5823] manager: startup complete
Feb 18 09:33:26 np0005623263 systemd[1]: Finished Network Manager Wait Online.
Feb 18 09:33:27 np0005623263 python3.9[57467]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:33:28 np0005623263 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 18 09:33:28 np0005623263 systemd[1]: Finished man-db-cache-update.service.
Feb 18 09:33:28 np0005623263 systemd[1]: run-rea9ece9ac2ca49e8bd14dc34f4f56830.service: Deactivated successfully.
Feb 18 09:33:31 np0005623263 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 18 09:33:31 np0005623263 systemd[1]: Starting man-db-cache-update.service...
Feb 18 09:33:31 np0005623263 systemd[1]: Reloading.
Feb 18 09:33:31 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:33:31 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:33:31 np0005623263 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 18 09:33:32 np0005623263 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 18 09:33:32 np0005623263 systemd[1]: Finished man-db-cache-update.service.
Feb 18 09:33:32 np0005623263 systemd[1]: run-rfaa9015afa3c465d8e153f1a5de03fe1.service: Deactivated successfully.
Feb 18 09:33:33 np0005623263 python3.9[57949]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:33:34 np0005623263 python3.9[58102]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:33:34 np0005623263 python3.9[58257]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:33:35 np0005623263 python3.9[58410]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:33:35 np0005623263 python3.9[58563]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:33:36 np0005623263 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 18 09:33:36 np0005623263 python3.9[58716]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:33:37 np0005623263 python3.9[58869]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:33:38 np0005623263 python3.9[58993]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425216.9557886-226-243923053183629/.source _original_basename=.fnl8w4ch follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:33:38 np0005623263 python3.9[59146]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:33:39 np0005623263 python3.9[59299]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Feb 18 09:33:40 np0005623263 python3.9[59452]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:33:42 np0005623263 python3.9[59882]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Feb 18 09:33:43 np0005623263 ansible-async_wrapper.py[60060]: Invoked with j390566453806 300 /home/zuul/.ansible/tmp/ansible-tmp-1771425222.2872334-292-163679929602779/AnsiballZ_edpm_os_net_config.py _
Feb 18 09:33:43 np0005623263 ansible-async_wrapper.py[60063]: Starting module and watcher
Feb 18 09:33:43 np0005623263 ansible-async_wrapper.py[60063]: Start watching 60064 (300)
Feb 18 09:33:43 np0005623263 ansible-async_wrapper.py[60064]: Start module (60064)
Feb 18 09:33:43 np0005623263 ansible-async_wrapper.py[60060]: Return async_wrapper task started.
Feb 18 09:33:43 np0005623263 python3.9[60065]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True remove_config=False safe_defaults=False use_nmstate=True purge_provider=
Feb 18 09:33:43 np0005623263 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Feb 18 09:33:43 np0005623263 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Feb 18 09:33:43 np0005623263 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Feb 18 09:33:43 np0005623263 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Feb 18 09:33:43 np0005623263 kernel: cfg80211: failed to load regulatory.db
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.0453] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.0466] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.0861] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.0862] audit: op="connection-add" uuid="2556c0e3-8911-461d-b9eb-1476768ea4ff" name="br-ex-br" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.0874] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.0875] audit: op="connection-add" uuid="3862bfdd-d250-40d0-97da-9dbffdb520bb" name="br-ex-port" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.0884] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.0884] audit: op="connection-add" uuid="c29c60e6-fd57-4dce-8482-525e12e1fe59" name="eth1-port" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.0893] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.0894] audit: op="connection-add" uuid="e838c401-9241-4791-a46f-71c17d27b22e" name="vlan20-port" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.0902] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.0903] audit: op="connection-add" uuid="7fbdf9a1-4293-4bf2-b723-18620746ec7b" name="vlan21-port" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.0911] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.0912] audit: op="connection-add" uuid="1b7b8b7d-cc60-42b9-bd2f-41346f402222" name="vlan22-port" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.0927] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.0939] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.0940] audit: op="connection-add" uuid="28c7b39d-8d0b-4bb5-8022-a2af4ea2f2f4" name="br-ex-if" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1135] audit: op="connection-update" uuid="dc274909-fe3e-5763-a5bf-c5286e2af0ce" name="ci-private-network" args="ipv6.addr-gen-mode,ipv6.dns,ipv6.addresses,ipv6.method,ipv6.routes,ipv6.routing-rules,connection.master,connection.slave-type,connection.port-type,connection.controller,connection.timestamp,ovs-interface.type,ovs-external-ids.data,ipv4.routes,ipv4.dns,ipv4.addresses,ipv4.method,ipv4.never-default,ipv4.routing-rules" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1161] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1163] audit: op="connection-add" uuid="4c27ad3e-2f8b-43d0-af35-fcf66edefb77" name="vlan20-if" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1186] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1188] audit: op="connection-add" uuid="4928d6ca-58f6-4997-a458-5762a23a5145" name="vlan21-if" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1207] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1209] audit: op="connection-add" uuid="cd8c3db0-172e-4e2a-a616-b3dbf78eb10f" name="vlan22-if" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1222] audit: op="connection-delete" uuid="4601d78c-d28d-3097-8b13-cc6f0dbfbab6" name="Wired connection 1" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1242] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <warn>  [1771425225.1244] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1252] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1258] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (2556c0e3-8911-461d-b9eb-1476768ea4ff)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1258] audit: op="connection-activate" uuid="2556c0e3-8911-461d-b9eb-1476768ea4ff" name="br-ex-br" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1261] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <warn>  [1771425225.1262] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1267] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1272] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (3862bfdd-d250-40d0-97da-9dbffdb520bb)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1273] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <warn>  [1771425225.1274] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1278] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1282] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (c29c60e6-fd57-4dce-8482-525e12e1fe59)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1283] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <warn>  [1771425225.1284] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1288] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1293] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (e838c401-9241-4791-a46f-71c17d27b22e)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1295] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <warn>  [1771425225.1296] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1301] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1305] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (7fbdf9a1-4293-4bf2-b723-18620746ec7b)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1307] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <warn>  [1771425225.1308] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1313] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1317] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (1b7b8b7d-cc60-42b9-bd2f-41346f402222)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1318] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1320] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1321] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1328] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <warn>  [1771425225.1329] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1333] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1336] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (28c7b39d-8d0b-4bb5-8022-a2af4ea2f2f4)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1337] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1341] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1342] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1344] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1345] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1357] device (eth1): disconnecting for new activation request.
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1357] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1377] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1381] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1383] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1390] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <warn>  [1771425225.1392] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1398] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1409] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (4c27ad3e-2f8b-43d0-af35-fcf66edefb77)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1411] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1416] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1420] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1422] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1427] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <warn>  [1771425225.1428] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1437] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1446] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (4928d6ca-58f6-4997-a458-5762a23a5145)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1447] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1454] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1458] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1461] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1467] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <warn>  [1771425225.1469] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1477] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1486] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (cd8c3db0-172e-4e2a-a616-b3dbf78eb10f)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1487] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1493] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1497] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1500] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1503] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1526] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.addr-gen-mode,ipv6.method,connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1530] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1536] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1539] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1551] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1557] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1564] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1570] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1573] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1582] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1590] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1596] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1600] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1609] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1616] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1622] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1625] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1633] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1643] dhcp4 (eth0): canceled DHCP transaction
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1643] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1643] dhcp4 (eth0): state changed no lease
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1647] dhcp4 (eth0): activation: beginning transaction (no timeout)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1662] audit: op="device-reapply" interface="eth1" ifindex=3 pid=60066 uid=0 result="fail" reason="Device is not activated"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.1897] dhcp4 (eth0): state changed new lease, address=38.102.83.12
Feb 18 09:33:45 np0005623263 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2053] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Feb 18 09:33:45 np0005623263 kernel: ovs-system: entered promiscuous mode
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2077] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2084] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb 18 09:33:45 np0005623263 kernel: Timeout policy base is empty
Feb 18 09:33:45 np0005623263 systemd-udevd[60072]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2226] device (eth1): Activation: starting connection 'ci-private-network' (dc274909-fe3e-5763-a5bf-c5286e2af0ce)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2233] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2237] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2249] device (eth1): disconnecting for new activation request.
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2250] audit: op="connection-activate" uuid="dc274909-fe3e-5763-a5bf-c5286e2af0ce" name="ci-private-network" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2254] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2272] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2283] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2288] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2292] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2293] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2293] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2300] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2302] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2304] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=60066 uid=0 result="success"
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2304] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2311] device (eth1): Activation: starting connection 'ci-private-network' (dc274909-fe3e-5763-a5bf-c5286e2af0ce)
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2313] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2316] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2318] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2321] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2324] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2326] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2329] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2331] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2334] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2337] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2341] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2343] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2349] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2360] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2363] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2513] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2515] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2521] device (eth1): Activation: successful, device activated.
Feb 18 09:33:45 np0005623263 kernel: br-ex: entered promiscuous mode
Feb 18 09:33:45 np0005623263 kernel: vlan22: entered promiscuous mode
Feb 18 09:33:45 np0005623263 systemd-udevd[60071]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 09:33:45 np0005623263 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2805] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2814] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2833] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2834] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2838] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2852] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Feb 18 09:33:45 np0005623263 kernel: vlan20: entered promiscuous mode
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.2867] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 kernel: vlan21: entered promiscuous mode
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.3030] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.3036] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.3037] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.3041] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.3045] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.3066] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.3072] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.3081] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.3082] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.3085] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.3091] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.3093] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 18 09:33:45 np0005623263 NetworkManager[57258]: <info>  [1771425225.3098] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 18 09:33:46 np0005623263 NetworkManager[57258]: <info>  [1771425226.3812] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=60066 uid=0 result="success"
Feb 18 09:33:46 np0005623263 NetworkManager[57258]: <info>  [1771425226.5905] checkpoint[0x56274ad9f950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Feb 18 09:33:46 np0005623263 NetworkManager[57258]: <info>  [1771425226.5907] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=60066 uid=0 result="success"
Feb 18 09:33:46 np0005623263 NetworkManager[57258]: <info>  [1771425226.8874] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=60066 uid=0 result="success"
Feb 18 09:33:46 np0005623263 NetworkManager[57258]: <info>  [1771425226.8887] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=60066 uid=0 result="success"
Feb 18 09:33:46 np0005623263 python3.9[60406]: ansible-ansible.legacy.async_status Invoked with jid=j390566453806.60060 mode=status _async_dir=/root/.ansible_async
Feb 18 09:33:47 np0005623263 NetworkManager[57258]: <info>  [1771425227.0910] audit: op="networking-control" arg="global-dns-configuration" pid=60066 uid=0 result="success"
Feb 18 09:33:47 np0005623263 NetworkManager[57258]: <info>  [1771425227.0944] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Feb 18 09:33:47 np0005623263 NetworkManager[57258]: <info>  [1771425227.0970] audit: op="networking-control" arg="global-dns-configuration" pid=60066 uid=0 result="success"
Feb 18 09:33:47 np0005623263 NetworkManager[57258]: <info>  [1771425227.0994] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=60066 uid=0 result="success"
Feb 18 09:33:47 np0005623263 NetworkManager[57258]: <info>  [1771425227.2209] checkpoint[0x56274ad9fa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Feb 18 09:33:47 np0005623263 NetworkManager[57258]: <info>  [1771425227.2212] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=60066 uid=0 result="success"
Feb 18 09:33:47 np0005623263 ansible-async_wrapper.py[60064]: Module complete (60064)
Feb 18 09:33:48 np0005623263 ansible-async_wrapper.py[60063]: Done in kid B.
Feb 18 09:33:50 np0005623263 python3.9[60512]: ansible-ansible.legacy.async_status Invoked with jid=j390566453806.60060 mode=status _async_dir=/root/.ansible_async
Feb 18 09:33:51 np0005623263 python3.9[60612]: ansible-ansible.legacy.async_status Invoked with jid=j390566453806.60060 mode=cleanup _async_dir=/root/.ansible_async
Feb 18 09:33:51 np0005623263 python3.9[60765]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:33:52 np0005623263 python3.9[60889]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425231.3211563-319-89393629642535/.source.returncode _original_basename=.17m5nlu7 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:33:52 np0005623263 python3.9[61042]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:33:53 np0005623263 python3.9[61166]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425232.4710665-335-601630554964/.source.cfg _original_basename=.hvz9f4cp follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:33:54 np0005623263 python3.9[61320]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:33:54 np0005623263 systemd[1]: Reloading Network Manager...
Feb 18 09:33:54 np0005623263 NetworkManager[57258]: <info>  [1771425234.1204] audit: op="reload" arg="0" pid=61324 uid=0 result="success"
Feb 18 09:33:54 np0005623263 NetworkManager[57258]: <info>  [1771425234.1211] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Feb 18 09:33:54 np0005623263 systemd[1]: Reloaded Network Manager.
Feb 18 09:33:54 np0005623263 systemd[1]: session-11.scope: Deactivated successfully.
Feb 18 09:33:54 np0005623263 systemd[1]: session-11.scope: Consumed 49.807s CPU time.
Feb 18 09:33:54 np0005623263 systemd-logind[831]: Session 11 logged out. Waiting for processes to exit.
Feb 18 09:33:54 np0005623263 systemd-logind[831]: Removed session 11.
Feb 18 09:33:56 np0005623263 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 18 09:34:04 np0005623263 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 18 09:34:05 np0005623263 systemd-logind[831]: New session 12 of user zuul.
Feb 18 09:34:05 np0005623263 systemd[1]: Started Session 12 of User zuul.
Feb 18 09:34:06 np0005623263 python3.9[61515]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:34:07 np0005623263 python3.9[61670]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 18 09:34:08 np0005623263 python3.9[61859]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:34:08 np0005623263 systemd[1]: session-12.scope: Deactivated successfully.
Feb 18 09:34:08 np0005623263 systemd[1]: session-12.scope: Consumed 2.100s CPU time.
Feb 18 09:34:08 np0005623263 systemd-logind[831]: Session 12 logged out. Waiting for processes to exit.
Feb 18 09:34:08 np0005623263 systemd-logind[831]: Removed session 12.
Feb 18 09:34:13 np0005623263 systemd-logind[831]: New session 13 of user zuul.
Feb 18 09:34:13 np0005623263 systemd[1]: Started Session 13 of User zuul.
Feb 18 09:34:14 np0005623263 python3.9[62040]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:34:15 np0005623263 python3.9[62194]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:34:16 np0005623263 python3.9[62352]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 18 09:34:17 np0005623263 python3.9[62437]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:34:19 np0005623263 python3.9[62591]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 18 09:34:20 np0005623263 python3.9[62783]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:34:21 np0005623263 python3.9[62936]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:34:21 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:34:22 np0005623263 python3.9[63099]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:34:22 np0005623263 python3.9[63178]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:34:23 np0005623263 python3.9[63331]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:34:23 np0005623263 python3.9[63410]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:34:24 np0005623263 python3.9[63563]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:34:25 np0005623263 python3.9[63716]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:34:25 np0005623263 python3.9[63869]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:34:26 np0005623263 python3.9[64022]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:34:26 np0005623263 python3.9[64175]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:34:29 np0005623263 python3.9[64329]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:34:29 np0005623263 python3.9[64484]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:34:30 np0005623263 python3.9[64637]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:34:30 np0005623263 python3.9[64790]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:34:31 np0005623263 python3.9[64944]: ansible-service_facts Invoked
Feb 18 09:34:31 np0005623263 network[64961]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 18 09:34:31 np0005623263 network[64962]: 'network-scripts' will be removed from distribution in near future.
Feb 18 09:34:31 np0005623263 network[64963]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 18 09:34:35 np0005623263 python3.9[65420]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:34:38 np0005623263 python3.9[65574]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb 18 09:34:39 np0005623263 python3.9[65727]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:34:39 np0005623263 python3.9[65853]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425278.6765654-227-104393853325114/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:34:40 np0005623263 python3.9[66008]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:34:40 np0005623263 python3.9[66134]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425279.966598-242-121111402805010/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:34:41 np0005623263 python3.9[66289]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:34:42 np0005623263 python3.9[66444]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 18 09:34:43 np0005623263 python3.9[66529]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:34:45 np0005623263 python3.9[66684]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 18 09:34:45 np0005623263 python3.9[66769]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:34:45 np0005623263 chronyd[839]: chronyd exiting
Feb 18 09:34:45 np0005623263 systemd[1]: Stopping NTP client/server...
Feb 18 09:34:45 np0005623263 systemd[1]: chronyd.service: Deactivated successfully.
Feb 18 09:34:45 np0005623263 systemd[1]: Stopped NTP client/server.
Feb 18 09:34:45 np0005623263 systemd[1]: Starting NTP client/server...
Feb 18 09:34:45 np0005623263 chronyd[66777]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb 18 09:34:45 np0005623263 chronyd[66777]: Frequency -25.248 +/- 0.183 ppm read from /var/lib/chrony/drift
Feb 18 09:34:45 np0005623263 chronyd[66777]: Loaded seccomp filter (level 2)
Feb 18 09:34:45 np0005623263 systemd[1]: Started NTP client/server.
Feb 18 09:34:46 np0005623263 systemd[1]: session-13.scope: Deactivated successfully.
Feb 18 09:34:46 np0005623263 systemd[1]: session-13.scope: Consumed 22.135s CPU time.
Feb 18 09:34:46 np0005623263 systemd-logind[831]: Session 13 logged out. Waiting for processes to exit.
Feb 18 09:34:46 np0005623263 systemd-logind[831]: Removed session 13.
Feb 18 09:34:51 np0005623263 systemd-logind[831]: New session 14 of user zuul.
Feb 18 09:34:51 np0005623263 systemd[1]: Started Session 14 of User zuul.
Feb 18 09:34:52 np0005623263 python3.9[66956]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:34:53 np0005623263 python3.9[67113]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:34:54 np0005623263 python3.9[67289]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:34:54 np0005623263 python3.9[67368]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.agg5lbzw recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:34:55 np0005623263 python3.9[67521]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:34:56 np0005623263 python3.9[67645]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425295.0740716-56-139173920789306/.source _original_basename=.cyfel35f follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:34:56 np0005623263 python3.9[67798]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:34:57 np0005623263 python3.9[67951]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:34:57 np0005623263 python3.9[68075]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771425296.860996-80-112687014256676/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:34:58 np0005623263 python3.9[68228]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:34:58 np0005623263 python3.9[68352]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771425298.126645-80-84676085293339/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:34:59 np0005623263 python3.9[68505]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:00 np0005623263 python3.9[68658]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:35:01 np0005623263 python3.9[68782]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425299.6839907-117-43293136606666/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:01 np0005623263 python3.9[68935]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:35:02 np0005623263 python3.9[69059]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425301.454386-132-76751492117980/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:03 np0005623263 python3.9[69212]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:35:03 np0005623263 systemd[1]: Reloading.
Feb 18 09:35:03 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:35:03 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:35:03 np0005623263 systemd[1]: Reloading.
Feb 18 09:35:03 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:35:03 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:35:03 np0005623263 systemd[1]: Starting EDPM Container Shutdown...
Feb 18 09:35:03 np0005623263 systemd[1]: Finished EDPM Container Shutdown.
Feb 18 09:35:04 np0005623263 python3.9[69453]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:35:05 np0005623263 python3.9[69577]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425304.0485442-155-137885063298119/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:05 np0005623263 python3.9[69730]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:35:06 np0005623263 python3.9[69856]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425305.2572021-170-5319037917806/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:07 np0005623263 python3.9[70009]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:35:07 np0005623263 systemd[1]: Reloading.
Feb 18 09:35:07 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:35:07 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:35:07 np0005623263 systemd[1]: Reloading.
Feb 18 09:35:07 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:35:07 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:35:07 np0005623263 systemd[1]: Starting Create netns directory...
Feb 18 09:35:07 np0005623263 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 18 09:35:07 np0005623263 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 18 09:35:07 np0005623263 systemd[1]: Finished Create netns directory.
Feb 18 09:35:08 np0005623263 python3.9[70249]: ansible-ansible.builtin.service_facts Invoked
Feb 18 09:35:08 np0005623263 network[70266]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 18 09:35:08 np0005623263 network[70267]: 'network-scripts' will be removed from distribution in near future.
Feb 18 09:35:08 np0005623263 network[70268]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 18 09:35:11 np0005623263 python3.9[70532]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:35:11 np0005623263 systemd[1]: Reloading.
Feb 18 09:35:11 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:35:11 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:35:11 np0005623263 systemd[1]: Stopping IPv4 firewall with iptables...
Feb 18 09:35:11 np0005623263 iptables.init[70578]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Feb 18 09:35:11 np0005623263 iptables.init[70578]: iptables: Flushing firewall rules: [  OK  ]
Feb 18 09:35:11 np0005623263 systemd[1]: iptables.service: Deactivated successfully.
Feb 18 09:35:11 np0005623263 systemd[1]: Stopped IPv4 firewall with iptables.
Feb 18 09:35:12 np0005623263 python3.9[70775]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:35:13 np0005623263 python3.9[70930]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:35:13 np0005623263 systemd[1]: Reloading.
Feb 18 09:35:13 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:35:13 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:35:13 np0005623263 systemd[1]: Starting Netfilter Tables...
Feb 18 09:35:13 np0005623263 systemd[1]: Finished Netfilter Tables.
Feb 18 09:35:14 np0005623263 python3.9[71131]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:35:15 np0005623263 python3.9[71285]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:35:15 np0005623263 python3.9[71411]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425314.9730096-239-49768672771345/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:16 np0005623263 python3.9[71565]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:35:16 np0005623263 systemd[1]: Reloading OpenSSH server daemon...
Feb 18 09:35:16 np0005623263 systemd[1]: Reloaded OpenSSH server daemon.
Feb 18 09:35:17 np0005623263 python3.9[71722]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:17 np0005623263 python3.9[71875]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:35:18 np0005623263 python3.9[71999]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425317.4005337-270-98114824152506/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:19 np0005623263 python3.9[72152]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 18 09:35:19 np0005623263 systemd[1]: Starting Time & Date Service...
Feb 18 09:35:19 np0005623263 systemd[1]: Started Time & Date Service.
Feb 18 09:35:19 np0005623263 python3.9[72309]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:20 np0005623263 python3.9[72462]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:35:20 np0005623263 python3.9[72586]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425319.8771877-305-198995648116759/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:21 np0005623263 python3.9[72739]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:35:21 np0005623263 python3.9[72863]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425320.8733637-320-140430343098581/.source.yaml _original_basename=.kkl8w3q4 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:22 np0005623263 python3.9[73016]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:35:22 np0005623263 python3.9[73140]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425321.9406276-335-199314673603967/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:23 np0005623263 python3.9[73293]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:35:24 np0005623263 python3.9[73447]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:35:24 np0005623263 python3[73601]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 18 09:35:25 np0005623263 python3.9[73754]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:35:25 np0005623263 python3.9[73878]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425325.0535486-374-56143598295341/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:26 np0005623263 python3.9[74031]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:35:27 np0005623263 python3.9[74155]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425326.1041613-389-218399498384193/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:27 np0005623263 python3.9[74308]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:35:28 np0005623263 python3.9[74432]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425327.3650665-404-157737501283513/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:28 np0005623263 python3.9[74585]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:35:29 np0005623263 python3.9[74709]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425328.3714614-419-150376397835536/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:29 np0005623263 python3.9[74862]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:35:30 np0005623263 python3.9[74986]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425329.361804-434-189682098185099/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:31 np0005623263 python3.9[75139]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:31 np0005623263 python3.9[75292]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:35:32 np0005623263 python3.9[75452]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:33 np0005623263 python3.9[75606]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:33 np0005623263 python3.9[75759]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:34 np0005623263 python3.9[75912]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 18 09:35:34 np0005623263 rsyslogd[1015]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 18 09:35:35 np0005623263 python3.9[76067]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 18 09:35:35 np0005623263 systemd[1]: session-14.scope: Deactivated successfully.
Feb 18 09:35:35 np0005623263 systemd[1]: session-14.scope: Consumed 29.641s CPU time.
Feb 18 09:35:35 np0005623263 systemd-logind[831]: Session 14 logged out. Waiting for processes to exit.
Feb 18 09:35:35 np0005623263 systemd-logind[831]: Removed session 14.
Feb 18 09:35:41 np0005623263 systemd-logind[831]: New session 15 of user zuul.
Feb 18 09:35:41 np0005623263 systemd[1]: Started Session 15 of User zuul.
Feb 18 09:35:41 np0005623263 python3.9[76249]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb 18 09:35:42 np0005623263 python3.9[76402]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:35:43 np0005623263 python3.9[76555]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:35:44 np0005623263 python3.9[76708]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCeK2Mcv16lfn+CJYy13FyTYac5H1RrDvhSmnEdu2ddBUwvVQ3B6Ivm+e+GAdB2nnrrxVXMVyo6syhbtWS2D8pF7beSvmLZmotqogM3FV98EG+bxXCzKaOGwx2fu6iMDfsc4wTmXzaqLIgvKwzPj1wfcGwBsg6+XKkMwwJL3JEPeyigxYdj40tsIr11q+b7CAqNT3941I0kzucH1WYOUBm76gUhqeEuVyIJDIE9VvBJHIgTKkkMJK37BTFwri/ULSX8PArj1BRHhur3TvgvlZUCn5A2+cMOzBVctjgd/Zu8cJvIbjALoLrciPHYnNtMbWcV837DL0D1lClEXCdxdurVYehB1wTp1IwMz2xTH2BDKriu6xsZzOBMOVzpWIy2XW5/Jcsy6D2jWSUwJmSa0wR1yWBryM0OzePrZqyytk9pye6hsiQUjmcy85R0aGadKBYmhbHaMjHzcuG2aI5be6DH0+uZhx9nfSTTTkyBL6m5nIPxmmR9o6bbQT2MhzlPWS0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDysA2g9BTyfdgEGjSsdodv1Y+GhmSKgQHAJyFWViP9r#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEzMwuVHwr+ALUqYZg+3RtXBNmiNMP16Kp9n7+x9nMEjLeNVPS/lEM/npD7XOdCZiRQkL8YVYKUcVqO/mp2O6+w=#012 create=True mode=0644 path=/tmp/ansible.ye8cco5d state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:45 np0005623263 python3.9[76863]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.ye8cco5d' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:35:46 np0005623263 python3.9[77018]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.ye8cco5d state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:46 np0005623263 systemd[1]: session-15.scope: Deactivated successfully.
Feb 18 09:35:46 np0005623263 systemd[1]: session-15.scope: Consumed 3.117s CPU time.
Feb 18 09:35:46 np0005623263 systemd-logind[831]: Session 15 logged out. Waiting for processes to exit.
Feb 18 09:35:46 np0005623263 systemd-logind[831]: Removed session 15.
Feb 18 09:35:49 np0005623263 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 18 09:35:52 np0005623263 systemd-logind[831]: New session 16 of user zuul.
Feb 18 09:35:52 np0005623263 systemd[1]: Started Session 16 of User zuul.
Feb 18 09:35:53 np0005623263 python3.9[77199]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:35:54 np0005623263 python3.9[77356]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 18 09:35:55 np0005623263 python3.9[77511]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:35:55 np0005623263 python3.9[77665]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:35:56 np0005623263 python3.9[77819]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:35:57 np0005623263 python3.9[77974]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:35:57 np0005623263 python3.9[78130]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:35:58 np0005623263 systemd[1]: session-16.scope: Deactivated successfully.
Feb 18 09:35:58 np0005623263 systemd[1]: session-16.scope: Consumed 3.784s CPU time.
Feb 18 09:35:58 np0005623263 systemd-logind[831]: Session 16 logged out. Waiting for processes to exit.
Feb 18 09:35:58 np0005623263 systemd-logind[831]: Removed session 16.
Feb 18 09:36:04 np0005623263 systemd-logind[831]: New session 17 of user zuul.
Feb 18 09:36:04 np0005623263 systemd[1]: Started Session 17 of User zuul.
Feb 18 09:36:05 np0005623263 python3.9[78308]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:36:05 np0005623263 python3.9[78465]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 18 09:36:06 np0005623263 python3.9[78550]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 18 09:36:08 np0005623263 python3.9[78701]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:36:10 np0005623263 python3.9[78852]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 18 09:36:10 np0005623263 python3.9[79002]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:36:11 np0005623263 python3.9[79152]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:36:11 np0005623263 systemd-logind[831]: Session 17 logged out. Waiting for processes to exit.
Feb 18 09:36:11 np0005623263 systemd[1]: session-17.scope: Deactivated successfully.
Feb 18 09:36:11 np0005623263 systemd[1]: session-17.scope: Consumed 5.466s CPU time.
Feb 18 09:36:11 np0005623263 systemd-logind[831]: Removed session 17.
Feb 18 09:36:17 np0005623263 systemd-logind[831]: New session 18 of user zuul.
Feb 18 09:36:17 np0005623263 systemd[1]: Started Session 18 of User zuul.
Feb 18 09:36:18 np0005623263 python3.9[79330]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:36:20 np0005623263 python3.9[79487]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:20 np0005623263 python3.9[79640]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:21 np0005623263 python3.9[79793]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:21 np0005623263 python3.9[79917]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425380.7245505-60-49838402594728/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=9da0c410c647b9f9637234a627e3e9b4fa5dfa3d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:22 np0005623263 python3.9[80072]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:22 np0005623263 python3.9[80196]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425382.082553-60-141829206817621/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=022446511055fcf55354d43d9b67f3eaf41e8b50 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:23 np0005623263 python3.9[80349]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:23 np0005623263 python3.9[80473]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425383.0930552-60-1961599818066/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=06473bdfd24a193431bcf37cb40c361e7a513fb2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:24 np0005623263 python3.9[80626]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:25 np0005623263 python3.9[80779]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:25 np0005623263 python3.9[80932]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:26 np0005623263 python3.9[81056]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425385.287773-119-280994678472498/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=cfd0888e10ddb7219d3f434953286a79fdacd97e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:26 np0005623263 python3.9[81209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:27 np0005623263 python3.9[81333]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425386.4067712-119-144254506124140/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=022446511055fcf55354d43d9b67f3eaf41e8b50 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:27 np0005623263 python3.9[81486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:28 np0005623263 python3.9[81610]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425387.4136786-119-182433284248338/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0a35f7c279b54a0aa2032794e73a0b0966ec4030 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:28 np0005623263 python3.9[81763]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:29 np0005623263 python3.9[81916]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:29 np0005623263 python3.9[82069]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:30 np0005623263 python3.9[82193]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425389.5149379-178-142049231334623/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f63e08933ed9da4f9e3eeb2b6b7a24c7aa2473f2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:30 np0005623263 python3.9[82346]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:31 np0005623263 python3.9[82470]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425390.5101905-178-27777936094345/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7279e1a31305f4377cdd1f358097b1ccfb2a9ac3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:31 np0005623263 python3.9[82623]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:32 np0005623263 python3.9[82747]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425391.4945662-178-160276121150306/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=99965bb38bca0ac7fa25212e65211050dc6512b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:33 np0005623263 python3.9[82900]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:33 np0005623263 python3.9[83053]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:34 np0005623263 python3.9[83206]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:34 np0005623263 python3.9[83330]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425393.681209-237-216839622480492/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=1e8efa016b014044148d2906af004f2ad56e3504 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:35 np0005623263 python3.9[83483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:35 np0005623263 python3.9[83607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425394.6386442-237-86895276929157/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7d651796e439464ebf38c9c32fc3ccc9c88089cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:36 np0005623263 python3.9[83760]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:36 np0005623263 python3.9[83884]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425395.6518126-237-95537244693641/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=bda7c3836acb787acc23e44ab2aec56b64735d6a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:37 np0005623263 python3.9[84037]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:37 np0005623263 python3.9[84190]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:38 np0005623263 python3.9[84343]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:38 np0005623263 python3.9[84467]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425397.8476408-296-12854916218652/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=02a7e3d7a26b65cc9ba12032d25e3c911600a46d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:39 np0005623263 python3.9[84620]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:39 np0005623263 python3.9[84744]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425398.8675804-296-66869412566898/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7279e1a31305f4377cdd1f358097b1ccfb2a9ac3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:40 np0005623263 python3.9[84897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:40 np0005623263 python3.9[85023]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425399.8622036-296-137265149217187/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7ace501cfde5e9b88da78552541c44f259963522 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:41 np0005623263 python3.9[85176]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:42 np0005623263 python3.9[85329]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:43 np0005623263 python3.9[85453]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425402.1251187-364-226901740658434/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a862c2e87f657cef02affa33f665475f81aa6bb9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:43 np0005623263 python3.9[85606]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:44 np0005623263 python3.9[85759]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:44 np0005623263 python3.9[85883]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425403.9089262-388-135830769642576/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a862c2e87f657cef02affa33f665475f81aa6bb9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:45 np0005623263 python3.9[86036]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:46 np0005623263 python3.9[86189]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:46 np0005623263 python3.9[86313]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425405.5753124-412-271054026213572/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a862c2e87f657cef02affa33f665475f81aa6bb9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:47 np0005623263 python3.9[86466]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:47 np0005623263 python3.9[86619]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:48 np0005623263 python3.9[86743]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425407.3827345-436-263170945677659/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a862c2e87f657cef02affa33f665475f81aa6bb9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:48 np0005623263 python3.9[86896]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:49 np0005623263 python3.9[87049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:49 np0005623263 python3.9[87173]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425409.0358896-460-15527882484661/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a862c2e87f657cef02affa33f665475f81aa6bb9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:50 np0005623263 python3.9[87326]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:51 np0005623263 python3.9[87479]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:51 np0005623263 python3.9[87603]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425410.7214756-484-110471345512280/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a862c2e87f657cef02affa33f665475f81aa6bb9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:52 np0005623263 python3.9[87756]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:53 np0005623263 python3.9[87909]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:53 np0005623263 python3.9[88033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425412.4738512-508-82544111138934/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a862c2e87f657cef02affa33f665475f81aa6bb9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:54 np0005623263 python3.9[88186]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:36:54 np0005623263 python3.9[88339]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:36:55 np0005623263 python3.9[88463]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425414.4072638-532-94671196348707/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a862c2e87f657cef02affa33f665475f81aa6bb9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:36:55 np0005623263 systemd[1]: session-18.scope: Deactivated successfully.
Feb 18 09:36:55 np0005623263 systemd[1]: session-18.scope: Consumed 29.171s CPU time.
Feb 18 09:36:55 np0005623263 systemd-logind[831]: Session 18 logged out. Waiting for processes to exit.
Feb 18 09:36:55 np0005623263 systemd-logind[831]: Removed session 18.
Feb 18 09:36:56 np0005623263 chronyd[66777]: Selected source 142.4.192.253 (pool.ntp.org)
Feb 18 09:37:00 np0005623263 systemd-logind[831]: New session 19 of user zuul.
Feb 18 09:37:00 np0005623263 systemd[1]: Started Session 19 of User zuul.
Feb 18 09:37:01 np0005623263 python3.9[88641]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:37:02 np0005623263 python3.9[88798]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:37:03 np0005623263 python3.9[88951]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:37:03 np0005623263 python3.9[89101]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:37:04 np0005623263 python3.9[89254]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 18 09:37:06 np0005623263 dbus-broker-launch[822]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Feb 18 09:37:06 np0005623263 python3.9[89411]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 18 09:37:07 np0005623263 python3.9[89496]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:37:09 np0005623263 python3.9[89650]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 18 09:37:10 np0005623263 python3[89806]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Feb 18 09:37:11 np0005623263 python3.9[89959]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:11 np0005623263 python3.9[90112]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:37:12 np0005623263 python3.9[90191]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:12 np0005623263 python3.9[90344]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:37:13 np0005623263 python3.9[90423]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._t0qm6xl recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:13 np0005623263 python3.9[90576]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:37:14 np0005623263 python3.9[90655]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:14 np0005623263 python3.9[90808]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:37:15 np0005623263 python3[90963]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 18 09:37:16 np0005623263 python3.9[91116]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:37:16 np0005623263 python3.9[91242]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425435.7403543-152-106678309081793/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:17 np0005623263 python3.9[91395]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:37:18 np0005623263 python3.9[91521]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425437.0034237-167-91715018813970/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:18 np0005623263 python3.9[91674]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:37:19 np0005623263 python3.9[91800]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425438.196387-182-110173975483296/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:19 np0005623263 python3.9[91953]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:37:20 np0005623263 python3.9[92079]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425439.4747005-197-86888402592358/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:21 np0005623263 python3.9[92232]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:37:21 np0005623263 python3.9[92358]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425440.5962803-212-153205776747035/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:22 np0005623263 python3.9[92511]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:22 np0005623263 python3.9[92664]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:37:23 np0005623263 python3.9[92820]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:24 np0005623263 python3.9[92973]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:37:24 np0005623263 python3.9[93127]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:37:25 np0005623263 python3.9[93282]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:37:25 np0005623263 python3.9[93438]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:26 np0005623263 python3.9[93588]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:37:27 np0005623263 python3.9[93742]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:2f:db:26:37" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:37:27 np0005623263 ovs-vsctl[93743]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:2f:db:26:37 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Feb 18 09:37:28 np0005623263 python3.9[93896]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:37:28 np0005623263 python3.9[94052]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:37:28 np0005623263 ovs-vsctl[94053]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Feb 18 09:37:29 np0005623263 python3.9[94203]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:37:30 np0005623263 python3.9[94358]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:37:30 np0005623263 python3.9[94511]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:37:31 np0005623263 python3.9[94590]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:37:31 np0005623263 python3.9[94743]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:37:31 np0005623263 python3.9[94822]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:37:32 np0005623263 python3.9[94975]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:32 np0005623263 python3.9[95128]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:37:33 np0005623263 python3.9[95207]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:33 np0005623263 python3.9[95360]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:37:34 np0005623263 python3.9[95439]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:34 np0005623263 python3.9[95592]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:37:34 np0005623263 systemd[1]: Reloading.
Feb 18 09:37:35 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:37:35 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:37:35 np0005623263 python3.9[95789]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:37:36 np0005623263 python3.9[95868]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:36 np0005623263 python3.9[96021]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:37:36 np0005623263 python3.9[96100]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:37 np0005623263 python3.9[96253]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:37:37 np0005623263 systemd[1]: Reloading.
Feb 18 09:37:37 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:37:37 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:37:37 np0005623263 systemd[1]: Starting Create netns directory...
Feb 18 09:37:37 np0005623263 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 18 09:37:37 np0005623263 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 18 09:37:37 np0005623263 systemd[1]: Finished Create netns directory.
Feb 18 09:37:38 np0005623263 python3.9[96454]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:37:39 np0005623263 python3.9[96607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:37:39 np0005623263 python3.9[96731]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771425458.6516135-464-169665457877056/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:37:40 np0005623263 python3.9[96884]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:40 np0005623263 python3.9[97037]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:37:41 np0005623263 python3.9[97190]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:37:41 np0005623263 python3.9[97314]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425460.9357889-497-63676888290715/.source.json _original_basename=.r6_4ml8f follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:42 np0005623263 python3.9[97464]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:43 np0005623263 python3.9[97888]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Feb 18 09:37:44 np0005623263 python3.9[98041]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 18 09:37:45 np0005623263 python3[98194]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Feb 18 09:37:45 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:37:45 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:37:45 np0005623263 podman[98228]: 2026-02-18 14:37:45.859063321 +0000 UTC m=+0.043744325 container create b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 18 09:37:45 np0005623263 podman[98228]: 2026-02-18 14:37:45.833638439 +0000 UTC m=+0.018319463 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 18 09:37:45 np0005623263 python3[98194]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 18 09:37:46 np0005623263 python3.9[98419]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:37:46 np0005623263 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 18 09:37:47 np0005623263 python3.9[98574]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:47 np0005623263 python3.9[98651]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:37:48 np0005623263 python3.9[98803]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771425467.5576384-575-124192280048070/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:48 np0005623263 python3.9[98880]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 18 09:37:48 np0005623263 systemd[1]: Reloading.
Feb 18 09:37:48 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:37:48 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:37:49 np0005623263 python3.9[98999]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:37:49 np0005623263 systemd[1]: Reloading.
Feb 18 09:37:49 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:37:49 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:37:49 np0005623263 systemd[1]: Starting ovn_controller container...
Feb 18 09:37:49 np0005623263 systemd[1]: Created slice Virtual Machine and Container Slice.
Feb 18 09:37:49 np0005623263 systemd[1]: Started libcrun container.
Feb 18 09:37:49 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c3d85e7de7513a2af43e8aff91fd523cfdf10358a7d46df4f2061c265e6afe/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb 18 09:37:49 np0005623263 systemd[1]: Started /usr/bin/podman healthcheck run b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164.
Feb 18 09:37:49 np0005623263 podman[99046]: 2026-02-18 14:37:49.707257819 +0000 UTC m=+0.106850499 container init b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 18 09:37:49 np0005623263 ovn_controller[99062]: + sudo -E kolla_set_configs
Feb 18 09:37:49 np0005623263 podman[99046]: 2026-02-18 14:37:49.733255715 +0000 UTC m=+0.132848375 container start b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 18 09:37:49 np0005623263 edpm-start-podman-container[99046]: ovn_controller
Feb 18 09:37:49 np0005623263 systemd[1]: Created slice User Slice of UID 0.
Feb 18 09:37:49 np0005623263 systemd[1]: Starting User Runtime Directory /run/user/0...
Feb 18 09:37:49 np0005623263 systemd[1]: Finished User Runtime Directory /run/user/0.
Feb 18 09:37:49 np0005623263 systemd[1]: Starting User Manager for UID 0...
Feb 18 09:37:49 np0005623263 edpm-start-podman-container[99045]: Creating additional drop-in dependency for "ovn_controller" (b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164)
Feb 18 09:37:49 np0005623263 podman[99068]: 2026-02-18 14:37:49.803613421 +0000 UTC m=+0.061678588 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 18 09:37:49 np0005623263 systemd[1]: Reloading.
Feb 18 09:37:49 np0005623263 systemd[99098]: Queued start job for default target Main User Target.
Feb 18 09:37:49 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:37:49 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:37:49 np0005623263 systemd[99098]: Created slice User Application Slice.
Feb 18 09:37:49 np0005623263 systemd[99098]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Feb 18 09:37:49 np0005623263 systemd[99098]: Started Daily Cleanup of User's Temporary Directories.
Feb 18 09:37:49 np0005623263 systemd[99098]: Reached target Paths.
Feb 18 09:37:49 np0005623263 systemd[99098]: Reached target Timers.
Feb 18 09:37:49 np0005623263 systemd[99098]: Starting D-Bus User Message Bus Socket...
Feb 18 09:37:49 np0005623263 systemd[99098]: Starting Create User's Volatile Files and Directories...
Feb 18 09:37:49 np0005623263 systemd[99098]: Listening on D-Bus User Message Bus Socket.
Feb 18 09:37:49 np0005623263 systemd[99098]: Reached target Sockets.
Feb 18 09:37:49 np0005623263 systemd[99098]: Finished Create User's Volatile Files and Directories.
Feb 18 09:37:49 np0005623263 systemd[99098]: Reached target Basic System.
Feb 18 09:37:49 np0005623263 systemd[99098]: Reached target Main User Target.
Feb 18 09:37:49 np0005623263 systemd[99098]: Startup finished in 88ms.
Feb 18 09:37:49 np0005623263 systemd[1]: Started User Manager for UID 0.
Feb 18 09:37:49 np0005623263 systemd[1]: Started ovn_controller container.
Feb 18 09:37:49 np0005623263 systemd[1]: b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164-69667f46349efb32.service: Main process exited, code=exited, status=1/FAILURE
Feb 18 09:37:49 np0005623263 systemd[1]: b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164-69667f46349efb32.service: Failed with result 'exit-code'.
Feb 18 09:37:50 np0005623263 systemd[1]: Started Session c1 of User root.
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: INFO:__main__:Validating config file
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: INFO:__main__:Writing out command to execute
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: ++ cat /run_command
Feb 18 09:37:50 np0005623263 systemd[1]: session-c1.scope: Deactivated successfully.
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: + ARGS=
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: + sudo kolla_copy_cacerts
Feb 18 09:37:50 np0005623263 systemd[1]: Started Session c2 of User root.
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: + [[ ! -n '' ]]
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: + . kolla_extend_start
Feb 18 09:37:50 np0005623263 systemd[1]: session-c2.scope: Deactivated successfully.
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: + umask 0022
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Feb 18 09:37:50 np0005623263 NetworkManager[57258]: <info>  [1771425470.2147] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Feb 18 09:37:50 np0005623263 NetworkManager[57258]: <info>  [1771425470.2156] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 18 09:37:50 np0005623263 NetworkManager[57258]: <warn>  [1771425470.2159] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 18 09:37:50 np0005623263 NetworkManager[57258]: <info>  [1771425470.2174] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Feb 18 09:37:50 np0005623263 NetworkManager[57258]: <info>  [1771425470.2181] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Feb 18 09:37:50 np0005623263 kernel: br-int: entered promiscuous mode
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 18 09:37:50 np0005623263 NetworkManager[57258]: <info>  [1771425470.2193] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00014|main|INFO|OVS feature set changed, force recompute.
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00022|main|INFO|OVS feature set changed, force recompute.
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 18 09:37:50 np0005623263 ovn_controller[99062]: 2026-02-18T14:37:50Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 18 09:37:50 np0005623263 NetworkManager[57258]: <info>  [1771425470.2329] manager: (ovn-ff5131-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Feb 18 09:37:50 np0005623263 systemd-udevd[99234]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 09:37:50 np0005623263 kernel: genev_sys_6081: entered promiscuous mode
Feb 18 09:37:50 np0005623263 systemd-udevd[99232]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 09:37:50 np0005623263 NetworkManager[57258]: <info>  [1771425470.2522] device (genev_sys_6081): carrier: link connected
Feb 18 09:37:50 np0005623263 NetworkManager[57258]: <info>  [1771425470.2526] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Feb 18 09:37:50 np0005623263 python3.9[99334]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 18 09:37:51 np0005623263 python3.9[99487]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:37:51 np0005623263 python3.9[99613]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425471.0607722-620-199410282181707/.source.yaml _original_basename=.hqmgxy65 follow=False checksum=dc8f297bdb19e7991db99625a88e4caddca8ba19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:37:52 np0005623263 python3.9[99766]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:37:52 np0005623263 ovs-vsctl[99767]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Feb 18 09:37:52 np0005623263 python3.9[99920]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:37:52 np0005623263 ovs-vsctl[99922]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Feb 18 09:37:53 np0005623263 python3.9[100076]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:37:53 np0005623263 ovs-vsctl[100077]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Feb 18 09:37:54 np0005623263 systemd-logind[831]: Session 19 logged out. Waiting for processes to exit.
Feb 18 09:37:54 np0005623263 systemd[1]: session-19.scope: Deactivated successfully.
Feb 18 09:37:54 np0005623263 systemd[1]: session-19.scope: Consumed 38.336s CPU time.
Feb 18 09:37:54 np0005623263 systemd-logind[831]: Removed session 19.
Feb 18 09:38:00 np0005623263 systemd[1]: Stopping User Manager for UID 0...
Feb 18 09:38:00 np0005623263 systemd[99098]: Activating special unit Exit the Session...
Feb 18 09:38:00 np0005623263 systemd[99098]: Stopped target Main User Target.
Feb 18 09:38:00 np0005623263 systemd[99098]: Stopped target Basic System.
Feb 18 09:38:00 np0005623263 systemd[99098]: Stopped target Paths.
Feb 18 09:38:00 np0005623263 systemd[99098]: Stopped target Sockets.
Feb 18 09:38:00 np0005623263 systemd[99098]: Stopped target Timers.
Feb 18 09:38:00 np0005623263 systemd[99098]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 18 09:38:00 np0005623263 systemd[99098]: Closed D-Bus User Message Bus Socket.
Feb 18 09:38:00 np0005623263 systemd[99098]: Stopped Create User's Volatile Files and Directories.
Feb 18 09:38:00 np0005623263 systemd[99098]: Removed slice User Application Slice.
Feb 18 09:38:00 np0005623263 systemd[99098]: Reached target Shutdown.
Feb 18 09:38:00 np0005623263 systemd[99098]: Finished Exit the Session.
Feb 18 09:38:00 np0005623263 systemd[99098]: Reached target Exit the Session.
Feb 18 09:38:00 np0005623263 systemd[1]: user@0.service: Deactivated successfully.
Feb 18 09:38:00 np0005623263 systemd[1]: Stopped User Manager for UID 0.
Feb 18 09:38:00 np0005623263 systemd[1]: Stopping User Runtime Directory /run/user/0...
Feb 18 09:38:00 np0005623263 systemd[1]: run-user-0.mount: Deactivated successfully.
Feb 18 09:38:00 np0005623263 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Feb 18 09:38:00 np0005623263 systemd[1]: Stopped User Runtime Directory /run/user/0.
Feb 18 09:38:00 np0005623263 systemd[1]: Removed slice User Slice of UID 0.
Feb 18 09:38:00 np0005623263 systemd-logind[831]: New session 21 of user zuul.
Feb 18 09:38:00 np0005623263 systemd[1]: Started Session 21 of User zuul.
Feb 18 09:38:01 np0005623263 python3.9[100259]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:38:02 np0005623263 python3.9[100416]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:38:03 np0005623263 python3.9[100569]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:38:03 np0005623263 python3.9[100722]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:38:04 np0005623263 python3.9[100875]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:38:04 np0005623263 python3.9[101028]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:38:05 np0005623263 python3.9[101178]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:38:06 np0005623263 python3.9[101331]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 18 09:38:07 np0005623263 python3.9[101481]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:38:07 np0005623263 python3.9[101602]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771425486.7898765-81-94465800090562/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:38:08 np0005623263 python3.9[101753]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:38:08 np0005623263 python3.9[101874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771425488.0825405-96-242534597773012/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:38:09 np0005623263 python3.9[102027]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 18 09:38:10 np0005623263 python3.9[102112]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:38:13 np0005623263 python3.9[102266]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 18 09:38:13 np0005623263 python3.9[102419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:38:14 np0005623263 python3.9[102540]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771425493.3361754-133-100526533916157/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:38:14 np0005623263 python3.9[102692]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:38:15 np0005623263 python3.9[102813]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771425494.2635732-133-204875947352233/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:38:16 np0005623263 python3.9[102965]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:38:16 np0005623263 python3.9[103086]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771425495.823254-177-170923514042406/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:38:17 np0005623263 python3.9[103236]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:38:17 np0005623263 python3.9[103357]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771425496.8855853-177-86477450938131/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:38:18 np0005623263 python3.9[103507]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:38:18 np0005623263 python3.9[103662]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:38:19 np0005623263 python3.9[103815]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:38:19 np0005623263 python3.9[103894]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:38:20 np0005623263 ovn_controller[99062]: 2026-02-18T14:38:20Z|00025|memory|INFO|16128 kB peak resident set size after 29.9 seconds
Feb 18 09:38:20 np0005623263 ovn_controller[99062]: 2026-02-18T14:38:20Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Feb 18 09:38:20 np0005623263 podman[104046]: 2026-02-18 14:38:20.125792515 +0000 UTC m=+0.089046845 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 18 09:38:20 np0005623263 python3.9[104048]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:38:20 np0005623263 python3.9[104153]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:38:21 np0005623263 python3.9[104306]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:38:21 np0005623263 python3.9[104459]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:38:22 np0005623263 python3.9[104538]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:38:22 np0005623263 python3.9[104691]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:38:22 np0005623263 python3.9[104770]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:38:23 np0005623263 python3.9[104923]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:38:23 np0005623263 systemd[1]: Reloading.
Feb 18 09:38:23 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:38:23 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:38:24 np0005623263 python3.9[105120]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:38:24 np0005623263 python3.9[105199]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:38:25 np0005623263 python3.9[105352]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:38:25 np0005623263 python3.9[105431]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:38:26 np0005623263 python3.9[105584]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:38:26 np0005623263 systemd[1]: Reloading.
Feb 18 09:38:26 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:38:26 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:38:26 np0005623263 systemd[1]: Starting Create netns directory...
Feb 18 09:38:26 np0005623263 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 18 09:38:26 np0005623263 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 18 09:38:26 np0005623263 systemd[1]: Finished Create netns directory.
Feb 18 09:38:27 np0005623263 python3.9[105784]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:38:28 np0005623263 python3.9[105937]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:38:28 np0005623263 python3.9[106061]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771425507.7319143-328-260880395696820/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:38:29 np0005623263 python3.9[106214]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:38:29 np0005623263 python3.9[106367]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:38:30 np0005623263 python3.9[106520]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:38:30 np0005623263 python3.9[106644]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425510.1306252-361-87152337997911/.source.json _original_basename=.illqphcj follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:38:31 np0005623263 python3.9[106794]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:38:33 np0005623263 python3.9[107218]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Feb 18 09:38:34 np0005623263 python3.9[107371]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 18 09:38:35 np0005623263 python3[107524]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Feb 18 09:38:35 np0005623263 podman[107561]: 2026-02-18 14:38:35.493149859 +0000 UTC m=+0.053583062 container create 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Feb 18 09:38:35 np0005623263 podman[107561]: 2026-02-18 14:38:35.457724516 +0000 UTC m=+0.018157759 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 18 09:38:35 np0005623263 python3[107524]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 18 09:38:36 np0005623263 python3.9[107752]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:38:36 np0005623263 python3.9[107907]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:38:37 np0005623263 python3.9[107984]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:38:37 np0005623263 python3.9[108136]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771425517.1939569-439-46964867259071/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:38:38 np0005623263 python3.9[108213]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 18 09:38:38 np0005623263 systemd[1]: Reloading.
Feb 18 09:38:38 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:38:38 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:38:38 np0005623263 python3.9[108332]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:38:38 np0005623263 systemd[1]: Reloading.
Feb 18 09:38:39 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:38:39 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:38:39 np0005623263 systemd[1]: Starting ovn_metadata_agent container...
Feb 18 09:38:39 np0005623263 systemd[1]: Started libcrun container.
Feb 18 09:38:39 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5808bac3a8dce06e0f78f5db2f86b53f001be2064722033ca3d36c6cdaab8d7c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Feb 18 09:38:39 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5808bac3a8dce06e0f78f5db2f86b53f001be2064722033ca3d36c6cdaab8d7c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 18 09:38:39 np0005623263 systemd[1]: Started /usr/bin/podman healthcheck run 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05.
Feb 18 09:38:39 np0005623263 podman[108380]: 2026-02-18 14:38:39.348315626 +0000 UTC m=+0.127521385 container init 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: + sudo -E kolla_set_configs
Feb 18 09:38:39 np0005623263 podman[108380]: 2026-02-18 14:38:39.38062365 +0000 UTC m=+0.159829399 container start 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 18 09:38:39 np0005623263 edpm-start-podman-container[108380]: ovn_metadata_agent
Feb 18 09:38:39 np0005623263 edpm-start-podman-container[108379]: Creating additional drop-in dependency for "ovn_metadata_agent" (99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05)
Feb 18 09:38:39 np0005623263 systemd[1]: Reloading.
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: INFO:__main__:Validating config file
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: INFO:__main__:Copying service configuration files
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: INFO:__main__:Writing out command to execute
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: INFO:__main__:Setting permission for /var/lib/neutron
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: ++ cat /run_command
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: + CMD=neutron-ovn-metadata-agent
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: + ARGS=
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: + sudo kolla_copy_cacerts
Feb 18 09:38:39 np0005623263 podman[108402]: 2026-02-18 14:38:39.463574391 +0000 UTC m=+0.070635462 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: + [[ ! -n '' ]]
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: + . kolla_extend_start
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: Running command: 'neutron-ovn-metadata-agent'
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: + umask 0022
Feb 18 09:38:39 np0005623263 ovn_metadata_agent[108395]: + exec neutron-ovn-metadata-agent
Feb 18 09:38:39 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:38:39 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:38:39 np0005623263 systemd[1]: Started ovn_metadata_agent container.
Feb 18 09:38:40 np0005623263 python3.9[108639]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 18 09:38:41 np0005623263 python3.9[108792]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.353 108400 INFO neutron.common.config [-] Logging enabled!#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.354 108400 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.354 108400 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.354 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.354 108400 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.354 108400 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.355 108400 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.355 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.355 108400 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.355 108400 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.355 108400 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.355 108400 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.355 108400 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.355 108400 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.355 108400 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.356 108400 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.356 108400 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.356 108400 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.356 108400 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.356 108400 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.356 108400 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.356 108400 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.356 108400 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.356 108400 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.357 108400 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.357 108400 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.357 108400 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.357 108400 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.357 108400 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.357 108400 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.357 108400 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.357 108400 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.357 108400 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.357 108400 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.358 108400 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.358 108400 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.358 108400 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.358 108400 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.358 108400 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.358 108400 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.358 108400 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.358 108400 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.358 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.359 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.359 108400 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.359 108400 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.359 108400 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.359 108400 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.359 108400 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.359 108400 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.359 108400 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.359 108400 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.359 108400 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.359 108400 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.360 108400 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.360 108400 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.360 108400 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.360 108400 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.360 108400 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.360 108400 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.360 108400 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.360 108400 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.360 108400 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.360 108400 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.361 108400 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.361 108400 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.361 108400 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.361 108400 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.361 108400 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.361 108400 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.361 108400 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.361 108400 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.362 108400 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.362 108400 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.362 108400 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.362 108400 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.362 108400 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.362 108400 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.362 108400 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.363 108400 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.363 108400 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.363 108400 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.363 108400 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.363 108400 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.363 108400 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.363 108400 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.363 108400 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.363 108400 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.363 108400 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.363 108400 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.364 108400 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.364 108400 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.364 108400 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.364 108400 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.364 108400 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.364 108400 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.364 108400 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.364 108400 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.364 108400 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.364 108400 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.364 108400 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.365 108400 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.365 108400 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.365 108400 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.365 108400 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.365 108400 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.365 108400 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.365 108400 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.365 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.365 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.366 108400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.366 108400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.366 108400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.366 108400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.366 108400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.366 108400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.366 108400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.366 108400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.367 108400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.367 108400 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.367 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.367 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.367 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.367 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.367 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.367 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.367 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.367 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.368 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.368 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.368 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.368 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.368 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.368 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.368 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.368 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.369 108400 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.369 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.369 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.369 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.369 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.369 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.369 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.369 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.370 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.370 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.370 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.370 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.370 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.370 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.370 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.370 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.371 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.371 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.371 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.371 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.371 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.371 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.371 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.372 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.372 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.372 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.372 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.372 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.372 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.372 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.372 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.373 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.373 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.373 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.373 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.373 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.373 108400 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.373 108400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.373 108400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.373 108400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.374 108400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.374 108400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.374 108400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.374 108400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.374 108400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.374 108400 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.374 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.374 108400 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.374 108400 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.374 108400 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.375 108400 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.375 108400 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.375 108400 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.375 108400 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.375 108400 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.375 108400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.375 108400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.375 108400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.375 108400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.376 108400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.376 108400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.376 108400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.376 108400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.376 108400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.376 108400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.376 108400 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.376 108400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.376 108400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.377 108400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.377 108400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.377 108400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.377 108400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.377 108400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.377 108400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.377 108400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.377 108400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.377 108400 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.377 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.378 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.378 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.378 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.378 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.378 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.378 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.378 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.378 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.378 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.378 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.379 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.379 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.379 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.379 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.379 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.379 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.379 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.379 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.379 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.379 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.380 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.380 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.380 108400 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.380 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.380 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.380 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.380 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.380 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.380 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.381 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.381 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.381 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.381 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.381 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.381 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.381 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.381 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.382 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.382 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.382 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.382 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.382 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.382 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.382 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.382 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.382 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.383 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.383 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.383 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.383 108400 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.383 108400 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.383 108400 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.383 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.383 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.383 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.384 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.384 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.384 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.384 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.384 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.384 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.384 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.384 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.384 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.385 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.385 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.385 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.385 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.385 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.385 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.385 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.385 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.385 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.385 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.386 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.386 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.386 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.386 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.386 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.386 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.386 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.386 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.387 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.387 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.387 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.387 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.387 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.387 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.387 108400 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.387 108400 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.397 108400 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.397 108400 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.397 108400 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.398 108400 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.398 108400 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.410 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name bff0df27-aa33-4d98-b417-cc9248f7a486 (UUID: bff0df27-aa33-4d98-b417-cc9248f7a486) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.442 108400 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.443 108400 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.443 108400 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.443 108400 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.446 108400 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.452 108400 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.457 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'bff0df27-aa33-4d98-b417-cc9248f7a486'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], external_ids={}, name=bff0df27-aa33-4d98-b417-cc9248f7a486, nb_cfg_timestamp=1771425478236, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.458 108400 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f039abbcd60>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.459 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.459 108400 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.459 108400 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.459 108400 INFO oslo_service.service [-] Starting 1 workers#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.463 108400 DEBUG oslo_service.service [-] Started child 108822 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.466 108400 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpi6qljtxp/privsep.sock']#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.466 108822 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-363262'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.484 108822 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.484 108822 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.485 108822 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.488 108822 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.494 108822 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Feb 18 09:38:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.499 108822 INFO eventlet.wsgi.server [-] (108822) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Feb 18 09:38:41 np0005623263 python3.9[108922]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425520.8359408-484-8217250319101/.source.yaml _original_basename=.6uyjkdov follow=False checksum=0a65acefe0fbba5322f1fca09821fdc127185e3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:38:41 np0005623263 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Feb 18 09:38:42 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:42.118 108400 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Feb 18 09:38:42 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:42.119 108400 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpi6qljtxp/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Feb 18 09:38:42 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.963 108948 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb 18 09:38:42 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.969 108948 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb 18 09:38:42 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.971 108948 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Feb 18 09:38:42 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:41.971 108948 INFO oslo.privsep.daemon [-] privsep daemon running as pid 108948#033[00m
Feb 18 09:38:42 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:42.122 108948 DEBUG oslo.privsep.daemon [-] privsep: reply[30bfe01e-4717-426c-8d73-10b87b5ae362]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 09:38:42 np0005623263 systemd[1]: session-21.scope: Deactivated successfully.
Feb 18 09:38:42 np0005623263 systemd[1]: session-21.scope: Consumed 30.006s CPU time.
Feb 18 09:38:42 np0005623263 systemd-logind[831]: Session 21 logged out. Waiting for processes to exit.
Feb 18 09:38:42 np0005623263 systemd-logind[831]: Removed session 21.
Feb 18 09:38:42 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:42.610 108948 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 09:38:42 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:42.610 108948 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 09:38:42 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:42.610 108948 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.146 108948 DEBUG oslo.privsep.daemon [-] privsep: reply[1e305982-bc22-43c0-ad24-0234f7afe0d3]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.148 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=bff0df27-aa33-4d98-b417-cc9248f7a486, column=external_ids, values=({'neutron:ovn-metadata-id': 'ecd0df77-f8e1-5f57-b6ea-4689c608975c'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.157 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bff0df27-aa33-4d98-b417-cc9248f7a486, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.163 108400 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.164 108400 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.164 108400 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.164 108400 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.164 108400 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.164 108400 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.164 108400 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.164 108400 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.164 108400 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.164 108400 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.165 108400 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.165 108400 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.165 108400 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.165 108400 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.165 108400 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.165 108400 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.165 108400 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.166 108400 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.166 108400 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.166 108400 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.166 108400 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.166 108400 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.166 108400 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.166 108400 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.166 108400 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.167 108400 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.167 108400 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.167 108400 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.167 108400 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.167 108400 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.167 108400 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.167 108400 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.168 108400 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.168 108400 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.168 108400 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.168 108400 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.168 108400 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.168 108400 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.168 108400 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.169 108400 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.169 108400 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.169 108400 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.169 108400 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.169 108400 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.169 108400 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.169 108400 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.169 108400 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.169 108400 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.169 108400 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.170 108400 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.170 108400 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.170 108400 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.170 108400 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.170 108400 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.170 108400 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.170 108400 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.170 108400 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.170 108400 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.171 108400 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.171 108400 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.171 108400 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.171 108400 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.171 108400 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.171 108400 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.171 108400 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.171 108400 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.171 108400 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.171 108400 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.172 108400 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.172 108400 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.172 108400 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.172 108400 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.172 108400 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.172 108400 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.172 108400 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.172 108400 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.172 108400 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.173 108400 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.173 108400 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.173 108400 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.173 108400 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.173 108400 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.173 108400 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.173 108400 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.173 108400 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.173 108400 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.173 108400 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.174 108400 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.174 108400 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.174 108400 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.174 108400 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.174 108400 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.174 108400 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.174 108400 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.174 108400 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.174 108400 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.174 108400 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.174 108400 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.175 108400 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.175 108400 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.175 108400 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.175 108400 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.175 108400 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.175 108400 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.175 108400 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.175 108400 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.175 108400 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.175 108400 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.176 108400 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.176 108400 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.176 108400 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.176 108400 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.176 108400 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.176 108400 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.176 108400 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.176 108400 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.176 108400 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.177 108400 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.177 108400 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.177 108400 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.177 108400 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.177 108400 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.177 108400 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.177 108400 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.177 108400 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.178 108400 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.178 108400 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.178 108400 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.178 108400 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.178 108400 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.178 108400 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.178 108400 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.178 108400 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.179 108400 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.179 108400 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.179 108400 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.179 108400 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.179 108400 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.179 108400 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.179 108400 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.179 108400 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.179 108400 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.180 108400 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.180 108400 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.180 108400 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.180 108400 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.180 108400 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.180 108400 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.180 108400 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.180 108400 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.180 108400 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.181 108400 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.181 108400 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.181 108400 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.181 108400 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.181 108400 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.181 108400 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.181 108400 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.181 108400 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.181 108400 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.181 108400 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.182 108400 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.182 108400 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.182 108400 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.182 108400 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.182 108400 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.182 108400 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.182 108400 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.182 108400 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.182 108400 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.183 108400 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.183 108400 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.183 108400 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.183 108400 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.183 108400 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.183 108400 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.183 108400 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.183 108400 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.184 108400 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.184 108400 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.184 108400 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.184 108400 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.184 108400 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.184 108400 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.184 108400 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.184 108400 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.184 108400 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.184 108400 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.185 108400 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.185 108400 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.185 108400 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.185 108400 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.185 108400 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.185 108400 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.185 108400 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.185 108400 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.185 108400 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.185 108400 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.186 108400 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.186 108400 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.186 108400 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.186 108400 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.186 108400 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.186 108400 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.186 108400 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.186 108400 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.186 108400 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.187 108400 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.187 108400 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.187 108400 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.187 108400 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.187 108400 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.187 108400 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.187 108400 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.187 108400 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.187 108400 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.187 108400 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.188 108400 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.188 108400 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.188 108400 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.188 108400 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.188 108400 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.188 108400 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.188 108400 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.188 108400 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.188 108400 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.188 108400 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.188 108400 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.189 108400 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.189 108400 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.189 108400 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.189 108400 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.189 108400 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.189 108400 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.189 108400 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.189 108400 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.189 108400 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.189 108400 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.190 108400 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.190 108400 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.190 108400 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.190 108400 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.190 108400 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.190 108400 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.190 108400 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.190 108400 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.190 108400 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.190 108400 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.191 108400 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.191 108400 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.191 108400 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.191 108400 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.191 108400 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.191 108400 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.191 108400 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.191 108400 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.191 108400 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.191 108400 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.192 108400 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.192 108400 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.192 108400 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.192 108400 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.192 108400 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.192 108400 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.192 108400 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.192 108400 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.193 108400 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.193 108400 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.193 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.193 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.193 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.193 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.193 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.194 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.194 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.194 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.194 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.194 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.194 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.194 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.194 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.195 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.195 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.195 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.195 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.195 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.195 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.195 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.195 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.195 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.195 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.195 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.196 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.196 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.196 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.196 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.196 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.196 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.196 108400 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.196 108400 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.197 108400 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.197 108400 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.197 108400 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:38:43 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:38:43.197 108400 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb 18 09:38:48 np0005623263 systemd-logind[831]: New session 22 of user zuul.
Feb 18 09:38:48 np0005623263 systemd[1]: Started Session 22 of User zuul.
Feb 18 09:38:49 np0005623263 python3.9[109106]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:38:50 np0005623263 podman[109262]: 2026-02-18 14:38:50.240892015 +0000 UTC m=+0.072809856 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 18 09:38:50 np0005623263 python3.9[109264]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:38:51 np0005623263 python3.9[109457]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 18 09:38:51 np0005623263 systemd[1]: Reloading.
Feb 18 09:38:51 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:38:51 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:38:52 np0005623263 python3.9[109649]: ansible-ansible.builtin.service_facts Invoked
Feb 18 09:38:52 np0005623263 network[109666]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 18 09:38:52 np0005623263 network[109667]: 'network-scripts' will be removed from distribution in near future.
Feb 18 09:38:52 np0005623263 network[109668]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 18 09:38:55 np0005623263 python3.9[109931]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:38:55 np0005623263 python3.9[110085]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:38:56 np0005623263 python3.9[110239]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:38:57 np0005623263 python3.9[110393]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:38:57 np0005623263 python3.9[110547]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:38:58 np0005623263 python3.9[110701]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:38:59 np0005623263 python3.9[110855]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:38:59 np0005623263 python3.9[111009]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:39:00 np0005623263 python3.9[111162]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:39:00 np0005623263 python3.9[111315]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:39:01 np0005623263 python3.9[111468]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:39:02 np0005623263 python3.9[111621]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:39:02 np0005623263 python3.9[111774]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:39:03 np0005623263 python3.9[111927]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:39:03 np0005623263 python3.9[112080]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:39:04 np0005623263 python3.9[112233]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:39:04 np0005623263 python3.9[112386]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:39:05 np0005623263 python3.9[112539]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:39:06 np0005623263 python3.9[112692]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:39:06 np0005623263 python3.9[112845]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:39:07 np0005623263 python3.9[112998]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:39:07 np0005623263 python3.9[113151]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:39:08 np0005623263 python3.9[113303]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 18 09:39:09 np0005623263 python3.9[113456]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 18 09:39:09 np0005623263 systemd[1]: Reloading.
Feb 18 09:39:09 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:39:09 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:39:09 np0005623263 podman[113576]: 2026-02-18 14:39:09.730619872 +0000 UTC m=+0.052995178 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 18 09:39:09 np0005623263 python3.9[113671]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:39:11 np0005623263 python3.9[113825]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:39:12 np0005623263 python3.9[113979]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:39:12 np0005623263 python3.9[114133]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:39:13 np0005623263 python3.9[114287]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:39:13 np0005623263 python3.9[114441]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:39:14 np0005623263 python3.9[114595]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:39:15 np0005623263 python3.9[114749]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Feb 18 09:39:15 np0005623263 python3.9[114903]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 18 09:39:16 np0005623263 python3.9[115062]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 18 09:39:17 np0005623263 python3.9[115223]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 18 09:39:18 np0005623263 python3.9[115308]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:39:20 np0005623263 podman[115315]: 2026-02-18 14:39:20.774422239 +0000 UTC m=+0.071801278 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 09:39:40 np0005623263 podman[115527]: 2026-02-18 14:39:40.719811341 +0000 UTC m=+0.047359176 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 18 09:39:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:39:41.400 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 09:39:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:39:41.402 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 09:39:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:39:41.403 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 09:39:42 np0005623263 kernel: SELinux:  Converting 2767 SID table entries...
Feb 18 09:39:42 np0005623263 kernel: SELinux:  policy capability network_peer_controls=1
Feb 18 09:39:42 np0005623263 kernel: SELinux:  policy capability open_perms=1
Feb 18 09:39:42 np0005623263 kernel: SELinux:  policy capability extended_socket_class=1
Feb 18 09:39:42 np0005623263 kernel: SELinux:  policy capability always_check_network=0
Feb 18 09:39:42 np0005623263 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 18 09:39:42 np0005623263 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 18 09:39:42 np0005623263 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 18 09:39:51 np0005623263 dbus-broker-launch[822]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Feb 18 09:39:51 np0005623263 podman[115555]: 2026-02-18 14:39:51.776815209 +0000 UTC m=+0.095884651 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 18 09:39:52 np0005623263 kernel: SELinux:  Converting 2767 SID table entries...
Feb 18 09:39:52 np0005623263 kernel: SELinux:  policy capability network_peer_controls=1
Feb 18 09:39:52 np0005623263 kernel: SELinux:  policy capability open_perms=1
Feb 18 09:39:52 np0005623263 kernel: SELinux:  policy capability extended_socket_class=1
Feb 18 09:39:52 np0005623263 kernel: SELinux:  policy capability always_check_network=0
Feb 18 09:39:52 np0005623263 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 18 09:39:52 np0005623263 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 18 09:39:52 np0005623263 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 18 09:40:11 np0005623263 dbus-broker-launch[822]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Feb 18 09:40:11 np0005623263 podman[122813]: 2026-02-18 14:40:11.803959922 +0000 UTC m=+0.090195711 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 18 09:40:22 np0005623263 podman[132491]: 2026-02-18 14:40:22.745831174 +0000 UTC m=+0.077566126 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 18 09:40:34 np0005623263 kernel: SELinux:  Converting 2768 SID table entries...
Feb 18 09:40:34 np0005623263 kernel: SELinux:  policy capability network_peer_controls=1
Feb 18 09:40:34 np0005623263 kernel: SELinux:  policy capability open_perms=1
Feb 18 09:40:34 np0005623263 kernel: SELinux:  policy capability extended_socket_class=1
Feb 18 09:40:34 np0005623263 kernel: SELinux:  policy capability always_check_network=0
Feb 18 09:40:34 np0005623263 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 18 09:40:34 np0005623263 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 18 09:40:34 np0005623263 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 18 09:40:36 np0005623263 dbus-broker-launch[819]: Noticed file-system modification, trigger reload.
Feb 18 09:40:36 np0005623263 dbus-broker-launch[822]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Feb 18 09:40:36 np0005623263 dbus-broker-launch[819]: Noticed file-system modification, trigger reload.
Feb 18 09:40:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:40:41.402 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 09:40:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:40:41.410 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 09:40:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:40:41.411 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 09:40:42 np0005623263 podman[132787]: 2026-02-18 14:40:42.179820455 +0000 UTC m=+0.088894487 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 09:40:42 np0005623263 systemd[1]: Stopping OpenSSH server daemon...
Feb 18 09:40:42 np0005623263 systemd[1]: sshd.service: Deactivated successfully.
Feb 18 09:40:42 np0005623263 systemd[1]: Stopped OpenSSH server daemon.
Feb 18 09:40:42 np0005623263 systemd[1]: sshd.service: Consumed 4.232s CPU time, read 564.0K from disk, written 168.0K to disk.
Feb 18 09:40:42 np0005623263 systemd[1]: Stopped target sshd-keygen.target.
Feb 18 09:40:42 np0005623263 systemd[1]: Stopping sshd-keygen.target...
Feb 18 09:40:42 np0005623263 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 18 09:40:42 np0005623263 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 18 09:40:42 np0005623263 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 18 09:40:42 np0005623263 systemd[1]: Reached target sshd-keygen.target.
Feb 18 09:40:42 np0005623263 systemd[1]: Starting OpenSSH server daemon...
Feb 18 09:40:42 np0005623263 systemd[1]: Started OpenSSH server daemon.
Feb 18 09:40:44 np0005623263 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 18 09:40:44 np0005623263 systemd[1]: Starting man-db-cache-update.service...
Feb 18 09:40:44 np0005623263 systemd[1]: Reloading.
Feb 18 09:40:44 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:40:44 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:40:44 np0005623263 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 18 09:40:47 np0005623263 python3.9[138809]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 18 09:40:47 np0005623263 systemd[1]: Reloading.
Feb 18 09:40:47 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:40:47 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:40:48 np0005623263 python3.9[140394]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 18 09:40:48 np0005623263 systemd[1]: Reloading.
Feb 18 09:40:48 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:40:48 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:40:49 np0005623263 python3.9[141876]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 18 09:40:49 np0005623263 systemd[1]: Reloading.
Feb 18 09:40:49 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:40:49 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:40:49 np0005623263 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 18 09:40:49 np0005623263 systemd[1]: Finished man-db-cache-update.service.
Feb 18 09:40:49 np0005623263 systemd[1]: man-db-cache-update.service: Consumed 7.067s CPU time.
Feb 18 09:40:49 np0005623263 systemd[1]: run-refe7fbdb503f40f585d9ce145cfa9afe.service: Deactivated successfully.
Feb 18 09:40:50 np0005623263 python3.9[142728]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 18 09:40:50 np0005623263 systemd[1]: Reloading.
Feb 18 09:40:50 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:40:50 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:40:51 np0005623263 python3.9[142925]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:40:51 np0005623263 systemd[1]: Reloading.
Feb 18 09:40:51 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:40:51 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:40:52 np0005623263 python3.9[143122]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:40:52 np0005623263 systemd[1]: Reloading.
Feb 18 09:40:52 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:40:52 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:40:53 np0005623263 podman[143291]: 2026-02-18 14:40:53.138859955 +0000 UTC m=+0.089277708 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 18 09:40:53 np0005623263 python3.9[143334]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:40:53 np0005623263 systemd[1]: Reloading.
Feb 18 09:40:53 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:40:53 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:40:54 np0005623263 python3.9[143546]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:40:55 np0005623263 python3.9[143702]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:40:55 np0005623263 systemd[1]: Reloading.
Feb 18 09:40:55 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:40:55 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:40:56 np0005623263 python3.9[143900]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 18 09:40:56 np0005623263 systemd[1]: Reloading.
Feb 18 09:40:56 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:40:56 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:40:56 np0005623263 systemd[1]: Listening on libvirt proxy daemon socket.
Feb 18 09:40:56 np0005623263 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Feb 18 09:40:57 np0005623263 python3.9[144101]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:40:57 np0005623263 python3.9[144257]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:40:58 np0005623263 python3.9[144413]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:40:59 np0005623263 python3.9[144569]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:40:59 np0005623263 python3.9[144725]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:41:00 np0005623263 python3.9[144881]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:41:01 np0005623263 python3.9[145037]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:41:01 np0005623263 python3.9[145193]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:41:02 np0005623263 python3.9[145349]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:41:03 np0005623263 python3.9[145505]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:41:03 np0005623263 python3.9[145661]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:41:04 np0005623263 python3.9[145817]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:41:05 np0005623263 python3.9[145973]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:41:05 np0005623263 python3.9[146129]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 18 09:41:06 np0005623263 python3.9[146285]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:41:07 np0005623263 python3.9[146438]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:41:07 np0005623263 python3.9[146591]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:41:08 np0005623263 python3.9[146744]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:41:08 np0005623263 python3.9[146897]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:41:09 np0005623263 python3.9[147050]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:41:10 np0005623263 python3.9[147200]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:41:10 np0005623263 python3.9[147353]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:11 np0005623263 python3.9[147479]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771425670.144278-557-105572547051176/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:12 np0005623263 python3.9[147632]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:12 np0005623263 podman[147635]: 2026-02-18 14:41:12.367007062 +0000 UTC m=+0.064515160 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb 18 09:41:12 np0005623263 python3.9[147777]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771425671.8401554-557-222275324919658/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:13 np0005623263 python3.9[147930]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:13 np0005623263 python3.9[148056]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771425672.8417199-557-198237086158554/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:14 np0005623263 python3.9[148209]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:14 np0005623263 python3.9[148335]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771425673.8185174-557-157436878512730/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:15 np0005623263 python3.9[148488]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:15 np0005623263 python3.9[148614]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771425674.8562331-557-149732005214557/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:16 np0005623263 python3.9[148767]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:16 np0005623263 python3.9[148893]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771425675.839145-557-117206642205261/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:17 np0005623263 python3.9[149046]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:17 np0005623263 python3.9[149172]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771425676.8643105-557-319008360866/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:18 np0005623263 python3.9[149325]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:18 np0005623263 python3.9[149451]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771425677.9149964-557-118556242737990/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:19 np0005623263 python3.9[149604]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Feb 18 09:41:20 np0005623263 python3.9[149758]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:20 np0005623263 python3.9[149911]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:21 np0005623263 python3.9[150064]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:21 np0005623263 python3.9[150217]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:22 np0005623263 python3.9[150370]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:23 np0005623263 python3.9[150523]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:23 np0005623263 podman[150647]: 2026-02-18 14:41:23.405931378 +0000 UTC m=+0.072917544 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 18 09:41:23 np0005623263 python3.9[150693]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:24 np0005623263 python3.9[150852]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:24 np0005623263 python3.9[151005]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:25 np0005623263 python3.9[151158]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:25 np0005623263 python3.9[151311]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:26 np0005623263 python3.9[151464]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:26 np0005623263 python3.9[151617]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:27 np0005623263 python3.9[151770]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:27 np0005623263 python3.9[151923]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:28 np0005623263 python3.9[152047]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425687.4105928-778-70432426541505/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:28 np0005623263 python3.9[152200]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:29 np0005623263 python3.9[152324]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425688.4684143-778-176982364250178/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:29 np0005623263 python3.9[152477]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:30 np0005623263 python3.9[152601]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425689.4455507-778-258064770634622/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:30 np0005623263 python3.9[152754]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:31 np0005623263 python3.9[152878]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425690.4281523-778-66263982481631/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:31 np0005623263 python3.9[153031]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:32 np0005623263 python3.9[153155]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425691.3977907-778-243326225934294/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:32 np0005623263 python3.9[153308]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:33 np0005623263 python3.9[153432]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425692.3481321-778-279243522999652/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:33 np0005623263 python3.9[153585]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:34 np0005623263 python3.9[153709]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425693.3170595-778-244209598684048/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:34 np0005623263 python3.9[153862]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:35 np0005623263 python3.9[153986]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425694.2953932-778-159844653932514/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:35 np0005623263 python3.9[154139]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:36 np0005623263 python3.9[154263]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425695.238414-778-199549110518926/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:36 np0005623263 python3.9[154416]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:37 np0005623263 python3.9[154540]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425696.2712483-778-17806487636618/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:37 np0005623263 python3.9[154693]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:38 np0005623263 python3.9[154817]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425697.298777-778-167854038574533/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:38 np0005623263 python3.9[154970]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:39 np0005623263 python3.9[155094]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425698.2847326-778-157701313744538/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:39 np0005623263 python3.9[155247]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:40 np0005623263 python3.9[155371]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425699.2978644-778-139054814594361/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:40 np0005623263 python3.9[155524]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:41 np0005623263 python3.9[155648]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425700.3100886-778-250307942343678/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:41:41.405 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 09:41:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:41:41.414 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 09:41:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:41:41.415 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 09:41:41 np0005623263 python3.9[155798]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:41:42 np0005623263 podman[155925]: 2026-02-18 14:41:42.500662916 +0000 UTC m=+0.045794511 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 18 09:41:42 np0005623263 python3.9[155969]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Feb 18 09:41:44 np0005623263 dbus-broker-launch[822]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Feb 18 09:41:44 np0005623263 python3.9[156130]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:44 np0005623263 python3.9[156283]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:45 np0005623263 python3.9[156436]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:45 np0005623263 python3.9[156589]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:46 np0005623263 python3.9[156742]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:46 np0005623263 python3.9[156895]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:47 np0005623263 python3.9[157048]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:47 np0005623263 python3.9[157201]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:48 np0005623263 python3.9[157354]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:48 np0005623263 python3.9[157507]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:49 np0005623263 python3.9[157660]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:41:49 np0005623263 systemd[1]: Reloading.
Feb 18 09:41:49 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:41:49 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:41:50 np0005623263 systemd[1]: Starting libvirt logging daemon socket...
Feb 18 09:41:50 np0005623263 systemd[1]: Listening on libvirt logging daemon socket.
Feb 18 09:41:50 np0005623263 systemd[1]: Starting libvirt logging daemon admin socket...
Feb 18 09:41:50 np0005623263 systemd[1]: Listening on libvirt logging daemon admin socket.
Feb 18 09:41:50 np0005623263 systemd[1]: Starting libvirt logging daemon...
Feb 18 09:41:50 np0005623263 systemd[1]: Started libvirt logging daemon.
Feb 18 09:41:50 np0005623263 python3.9[157861]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:41:50 np0005623263 systemd[1]: Reloading.
Feb 18 09:41:50 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:41:50 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:41:50 np0005623263 systemd[1]: Starting libvirt nodedev daemon socket...
Feb 18 09:41:50 np0005623263 systemd[1]: Listening on libvirt nodedev daemon socket.
Feb 18 09:41:50 np0005623263 systemd[1]: Starting libvirt nodedev daemon admin socket...
Feb 18 09:41:50 np0005623263 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Feb 18 09:41:50 np0005623263 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Feb 18 09:41:50 np0005623263 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Feb 18 09:41:50 np0005623263 systemd[1]: Starting libvirt nodedev daemon...
Feb 18 09:41:51 np0005623263 systemd[1]: Started libvirt nodedev daemon.
Feb 18 09:41:51 np0005623263 python3.9[158084]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:41:51 np0005623263 systemd[1]: Reloading.
Feb 18 09:41:51 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:41:51 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:41:51 np0005623263 systemd[1]: Starting libvirt proxy daemon admin socket...
Feb 18 09:41:51 np0005623263 systemd[1]: Starting libvirt proxy daemon read-only socket...
Feb 18 09:41:51 np0005623263 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Feb 18 09:41:51 np0005623263 systemd[1]: Listening on libvirt proxy daemon admin socket.
Feb 18 09:41:51 np0005623263 systemd[1]: Starting libvirt proxy daemon...
Feb 18 09:41:51 np0005623263 systemd[1]: Started libvirt proxy daemon.
Feb 18 09:41:52 np0005623263 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Feb 18 09:41:52 np0005623263 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Feb 18 09:41:52 np0005623263 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Feb 18 09:41:52 np0005623263 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Feb 18 09:41:52 np0005623263 python3.9[158304]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:41:52 np0005623263 systemd[1]: Reloading.
Feb 18 09:41:52 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:41:52 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:41:52 np0005623263 systemd[1]: Listening on libvirt locking daemon socket.
Feb 18 09:41:52 np0005623263 systemd[1]: Starting libvirt QEMU daemon socket...
Feb 18 09:41:52 np0005623263 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 18 09:41:52 np0005623263 systemd[1]: Starting Virtual Machine and Container Registration Service...
Feb 18 09:41:52 np0005623263 systemd[1]: Listening on libvirt QEMU daemon socket.
Feb 18 09:41:52 np0005623263 systemd[1]: Starting libvirt QEMU daemon admin socket...
Feb 18 09:41:52 np0005623263 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Feb 18 09:41:52 np0005623263 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Feb 18 09:41:52 np0005623263 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Feb 18 09:41:52 np0005623263 systemd[1]: Started Virtual Machine and Container Registration Service.
Feb 18 09:41:52 np0005623263 systemd[1]: Starting libvirt QEMU daemon...
Feb 18 09:41:52 np0005623263 systemd[1]: Started libvirt QEMU daemon.
Feb 18 09:41:53 np0005623263 setroubleshoot[158151]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l f30f0fc6-ba6e-41e6-bea0-a4f0c636ea17
Feb 18 09:41:53 np0005623263 setroubleshoot[158151]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Feb 18 09:41:53 np0005623263 setroubleshoot[158151]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l f30f0fc6-ba6e-41e6-bea0-a4f0c636ea17
Feb 18 09:41:53 np0005623263 rsyslogd[1015]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 18 09:41:53 np0005623263 rsyslogd[1015]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 18 09:41:53 np0005623263 setroubleshoot[158151]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Feb 18 09:41:53 np0005623263 python3.9[158537]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:41:53 np0005623263 systemd[1]: Reloading.
Feb 18 09:41:53 np0005623263 podman[158540]: 2026-02-18 14:41:53.756765769 +0000 UTC m=+0.090615856 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 18 09:41:53 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:41:53 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:41:53 np0005623263 systemd[1]: Starting libvirt secret daemon socket...
Feb 18 09:41:53 np0005623263 systemd[1]: Listening on libvirt secret daemon socket.
Feb 18 09:41:53 np0005623263 systemd[1]: Starting libvirt secret daemon admin socket...
Feb 18 09:41:53 np0005623263 systemd[1]: Starting libvirt secret daemon read-only socket...
Feb 18 09:41:53 np0005623263 systemd[1]: Listening on libvirt secret daemon admin socket.
Feb 18 09:41:53 np0005623263 systemd[1]: Listening on libvirt secret daemon read-only socket.
Feb 18 09:41:53 np0005623263 systemd[1]: Starting libvirt secret daemon...
Feb 18 09:41:54 np0005623263 systemd[1]: Started libvirt secret daemon.
Feb 18 09:41:54 np0005623263 python3.9[158783]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:55 np0005623263 python3.9[158936]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 18 09:41:56 np0005623263 python3.9[159089]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:56 np0005623263 python3.9[159213]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425715.6783721-1123-10526961328/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:57 np0005623263 python3.9[159366]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:57 np0005623263 python3.9[159519]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:58 np0005623263 python3.9[159598]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:58 np0005623263 python3.9[159751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:41:59 np0005623263 python3.9[159830]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._wb7iozo recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:41:59 np0005623263 python3.9[159983]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:42:00 np0005623263 python3.9[160062]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:00 np0005623263 python3.9[160215]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:42:01 np0005623263 python3[160369]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 18 09:42:02 np0005623263 python3.9[160522]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:42:03 np0005623263 python3.9[160601]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:03 np0005623263 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Feb 18 09:42:03 np0005623263 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.054s CPU time.
Feb 18 09:42:03 np0005623263 systemd[1]: setroubleshootd.service: Deactivated successfully.
Feb 18 09:42:04 np0005623263 python3.9[160754]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:42:04 np0005623263 python3.9[160880]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425723.6181374-1212-276877844308961/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:04 np0005623263 python3.9[161033]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:42:05 np0005623263 python3.9[161112]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:05 np0005623263 python3.9[161265]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:42:06 np0005623263 python3.9[161344]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:07 np0005623263 python3.9[161497]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:42:07 np0005623263 python3.9[161623]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771425726.6052666-1251-184651833017084/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:08 np0005623263 python3.9[161776]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:08 np0005623263 python3.9[161929]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:42:09 np0005623263 python3.9[162085]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:10 np0005623263 python3.9[162238]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:42:10 np0005623263 python3.9[162392]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:42:11 np0005623263 python3.9[162547]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:42:11 np0005623263 python3.9[162705]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:12 np0005623263 python3.9[162858]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:42:12 np0005623263 python3.9[162984]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425731.8347633-1323-129490235249532/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:12 np0005623263 podman[162985]: 2026-02-18 14:42:12.729739955 +0000 UTC m=+0.055855419 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 18 09:42:13 np0005623263 python3.9[163157]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:42:13 np0005623263 python3.9[163281]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425732.8223646-1338-105303199857849/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:14 np0005623263 python3.9[163434]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:42:14 np0005623263 python3.9[163558]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425733.9198415-1353-35019934306682/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:15 np0005623263 python3.9[163711]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:42:15 np0005623263 systemd[1]: Reloading.
Feb 18 09:42:15 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:42:15 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:42:15 np0005623263 systemd[1]: Reached target edpm_libvirt.target.
Feb 18 09:42:16 np0005623263 python3.9[163910]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 18 09:42:16 np0005623263 systemd[1]: Reloading.
Feb 18 09:42:16 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:42:16 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:42:16 np0005623263 systemd[1]: Reloading.
Feb 18 09:42:16 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:42:16 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:42:17 np0005623263 systemd[1]: session-22.scope: Deactivated successfully.
Feb 18 09:42:17 np0005623263 systemd[1]: session-22.scope: Consumed 2min 49.859s CPU time.
Feb 18 09:42:17 np0005623263 systemd-logind[831]: Session 22 logged out. Waiting for processes to exit.
Feb 18 09:42:17 np0005623263 systemd-logind[831]: Removed session 22.
Feb 18 09:42:22 np0005623263 systemd-logind[831]: New session 23 of user zuul.
Feb 18 09:42:22 np0005623263 systemd[1]: Started Session 23 of User zuul.
Feb 18 09:42:23 np0005623263 python3.9[164175]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:42:24 np0005623263 podman[164303]: 2026-02-18 14:42:24.309857849 +0000 UTC m=+0.078519194 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 18 09:42:24 np0005623263 python3.9[164339]: ansible-ansible.builtin.service_facts Invoked
Feb 18 09:42:24 np0005623263 network[164372]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 18 09:42:24 np0005623263 network[164373]: 'network-scripts' will be removed from distribution in near future.
Feb 18 09:42:24 np0005623263 network[164374]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 18 09:42:27 np0005623263 python3.9[164647]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 18 09:42:28 np0005623263 python3.9[164732]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:42:34 np0005623263 python3.9[164886]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:42:35 np0005623263 python3.9[165039]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:42:36 np0005623263 python3.9[165193]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:42:36 np0005623263 python3.9[165346]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:42:37 np0005623263 python3.9[165500]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:42:37 np0005623263 python3.9[165624]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425756.9306698-90-238095359504441/.source.iscsi _original_basename=.4faipfqv follow=False checksum=0dc1439c4df74b840ada941eabffcb1ff2afca3c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:38 np0005623263 python3.9[165777]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:39 np0005623263 python3.9[165932]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:40 np0005623263 python3.9[166085]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:42:40 np0005623263 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Feb 18 09:42:41 np0005623263 python3.9[166242]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:42:41 np0005623263 systemd[1]: Reloading.
Feb 18 09:42:41 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:42:41 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:42:41 np0005623263 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 18 09:42:41 np0005623263 systemd[1]: Starting Open-iSCSI...
Feb 18 09:42:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:42:41.407 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 09:42:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:42:41.414 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 09:42:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:42:41.415 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 09:42:41 np0005623263 kernel: Loading iSCSI transport class v2.0-870.
Feb 18 09:42:41 np0005623263 systemd[1]: Started Open-iSCSI.
Feb 18 09:42:41 np0005623263 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Feb 18 09:42:41 np0005623263 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Feb 18 09:42:42 np0005623263 python3.9[166447]: ansible-ansible.builtin.service_facts Invoked
Feb 18 09:42:42 np0005623263 network[166464]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 18 09:42:42 np0005623263 network[166465]: 'network-scripts' will be removed from distribution in near future.
Feb 18 09:42:42 np0005623263 network[166466]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 18 09:42:42 np0005623263 podman[166473]: 2026-02-18 14:42:42.845618826 +0000 UTC m=+0.059482468 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 18 09:42:45 np0005623263 python3.9[166757]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:42:47 np0005623263 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 18 09:42:47 np0005623263 systemd[1]: Starting man-db-cache-update.service...
Feb 18 09:42:47 np0005623263 systemd[1]: Reloading.
Feb 18 09:42:47 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:42:47 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:42:47 np0005623263 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 18 09:42:47 np0005623263 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 18 09:42:47 np0005623263 systemd[1]: Finished man-db-cache-update.service.
Feb 18 09:42:47 np0005623263 systemd[1]: run-r0607694f892842f982139b1ee0da0241.service: Deactivated successfully.
Feb 18 09:42:48 np0005623263 python3.9[167088]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 18 09:42:49 np0005623263 python3.9[167241]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Feb 18 09:42:49 np0005623263 python3.9[167398]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:42:50 np0005623263 python3.9[167522]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425769.5391412-178-170182336169616/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:50 np0005623263 python3.9[167675]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:52 np0005623263 python3.9[167828]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:42:52 np0005623263 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 18 09:42:52 np0005623263 systemd[1]: Stopped Load Kernel Modules.
Feb 18 09:42:52 np0005623263 systemd[1]: Stopping Load Kernel Modules...
Feb 18 09:42:52 np0005623263 systemd[1]: Starting Load Kernel Modules...
Feb 18 09:42:52 np0005623263 systemd[1]: Finished Load Kernel Modules.
Feb 18 09:42:52 np0005623263 python3.9[167986]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:42:53 np0005623263 python3.9[168140]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:42:53 np0005623263 python3.9[168293]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:42:54 np0005623263 python3.9[168417]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425773.4549677-229-71419140258073/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:54 np0005623263 podman[168418]: 2026-02-18 14:42:54.518958978 +0000 UTC m=+0.117807106 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 18 09:42:54 np0005623263 python3.9[168596]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:42:55 np0005623263 python3.9[168750]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:56 np0005623263 python3.9[168903]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:56 np0005623263 python3.9[169056]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:57 np0005623263 python3.9[169209]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:57 np0005623263 python3.9[169362]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:58 np0005623263 python3.9[169515]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:58 np0005623263 python3.9[169668]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:42:59 np0005623263 python3.9[169821]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:42:59 np0005623263 python3.9[169976]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:43:00 np0005623263 python3.9[170130]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:43:00 np0005623263 systemd[1]: Listening on multipathd control socket.
Feb 18 09:43:01 np0005623263 python3.9[170287]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:43:01 np0005623263 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Feb 18 09:43:01 np0005623263 udevadm[170292]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Feb 18 09:43:01 np0005623263 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Feb 18 09:43:01 np0005623263 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 18 09:43:01 np0005623263 multipathd[170295]: --------start up--------
Feb 18 09:43:01 np0005623263 multipathd[170295]: read /etc/multipath.conf
Feb 18 09:43:01 np0005623263 multipathd[170295]: path checkers start up
Feb 18 09:43:01 np0005623263 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 18 09:43:03 np0005623263 python3.9[170456]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 18 09:43:04 np0005623263 python3.9[170609]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Feb 18 09:43:04 np0005623263 kernel: Key type psk registered
Feb 18 09:43:04 np0005623263 python3.9[170772]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:43:05 np0005623263 python3.9[170896]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425784.4312198-359-22165660828875/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:06 np0005623263 python3.9[171049]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:06 np0005623263 python3.9[171202]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:43:06 np0005623263 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 18 09:43:06 np0005623263 systemd[1]: Stopped Load Kernel Modules.
Feb 18 09:43:06 np0005623263 systemd[1]: Stopping Load Kernel Modules...
Feb 18 09:43:06 np0005623263 systemd[1]: Starting Load Kernel Modules...
Feb 18 09:43:06 np0005623263 systemd[1]: Finished Load Kernel Modules.
Feb 18 09:43:07 np0005623263 python3.9[171359]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:43:10 np0005623263 systemd[1]: Reloading.
Feb 18 09:43:10 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:43:10 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:43:10 np0005623263 systemd[1]: Reloading.
Feb 18 09:43:10 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:43:10 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:43:11 np0005623263 systemd-logind[831]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 18 09:43:11 np0005623263 systemd-logind[831]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 18 09:43:11 np0005623263 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 18 09:43:11 np0005623263 systemd[1]: Starting man-db-cache-update.service...
Feb 18 09:43:11 np0005623263 systemd[1]: Reloading.
Feb 18 09:43:11 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:43:11 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:43:11 np0005623263 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 18 09:43:12 np0005623263 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 18 09:43:12 np0005623263 systemd[1]: Finished man-db-cache-update.service.
Feb 18 09:43:12 np0005623263 systemd[1]: man-db-cache-update.service: Consumed 1.110s CPU time.
Feb 18 09:43:12 np0005623263 systemd[1]: run-r36c1ad8bba364096a2caef60444950eb.service: Deactivated successfully.
Feb 18 09:43:12 np0005623263 python3.9[172855]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:43:12 np0005623263 systemd[1]: Stopping Open-iSCSI...
Feb 18 09:43:12 np0005623263 iscsid[166289]: iscsid shutting down.
Feb 18 09:43:12 np0005623263 systemd[1]: iscsid.service: Deactivated successfully.
Feb 18 09:43:12 np0005623263 systemd[1]: Stopped Open-iSCSI.
Feb 18 09:43:12 np0005623263 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 18 09:43:12 np0005623263 systemd[1]: Starting Open-iSCSI...
Feb 18 09:43:12 np0005623263 systemd[1]: Started Open-iSCSI.
Feb 18 09:43:13 np0005623263 podman[172984]: 2026-02-18 14:43:13.132735374 +0000 UTC m=+0.071403586 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 18 09:43:13 np0005623263 python3.9[173032]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:43:13 np0005623263 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb 18 09:43:13 np0005623263 multipathd[170295]: exit (signal)
Feb 18 09:43:13 np0005623263 multipathd[170295]: --------shut down-------
Feb 18 09:43:13 np0005623263 systemd[1]: multipathd.service: Deactivated successfully.
Feb 18 09:43:13 np0005623263 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb 18 09:43:13 np0005623263 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 18 09:43:13 np0005623263 multipathd[173039]: --------start up--------
Feb 18 09:43:13 np0005623263 multipathd[173039]: read /etc/multipath.conf
Feb 18 09:43:13 np0005623263 multipathd[173039]: path checkers start up
Feb 18 09:43:13 np0005623263 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 18 09:43:14 np0005623263 python3.9[173197]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:43:15 np0005623263 python3.9[173354]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:15 np0005623263 python3.9[173507]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 18 09:43:15 np0005623263 systemd[1]: Reloading.
Feb 18 09:43:15 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:43:15 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:43:17 np0005623263 python3.9[173699]: ansible-ansible.builtin.service_facts Invoked
Feb 18 09:43:17 np0005623263 network[173716]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 18 09:43:17 np0005623263 network[173717]: 'network-scripts' will be removed from distribution in near future.
Feb 18 09:43:17 np0005623263 network[173718]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 18 09:43:20 np0005623263 python3.9[173992]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:43:20 np0005623263 python3.9[174146]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:43:21 np0005623263 python3.9[174300]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:43:22 np0005623263 python3.9[174454]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:43:22 np0005623263 python3.9[174608]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:43:23 np0005623263 python3.9[174762]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:43:24 np0005623263 python3.9[174916]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:43:24 np0005623263 python3.9[175070]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:43:24 np0005623263 podman[175071]: 2026-02-18 14:43:24.768400992 +0000 UTC m=+0.093139439 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 18 09:43:25 np0005623263 python3.9[175250]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:25 np0005623263 python3.9[175403]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:26 np0005623263 python3.9[175556]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:26 np0005623263 python3.9[175709]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:27 np0005623263 python3.9[175862]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:27 np0005623263 python3.9[176015]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:28 np0005623263 python3.9[176168]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:28 np0005623263 python3.9[176321]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:29 np0005623263 python3.9[176474]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:30 np0005623263 python3.9[176627]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:30 np0005623263 python3.9[176780]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:31 np0005623263 python3.9[176933]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:31 np0005623263 python3.9[177086]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:32 np0005623263 python3.9[177239]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:32 np0005623263 python3.9[177392]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:33 np0005623263 python3.9[177545]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:43:33 np0005623263 python3.9[177698]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:43:34 np0005623263 python3.9[177850]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 18 09:43:35 np0005623263 python3.9[178003]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 18 09:43:35 np0005623263 systemd[1]: Reloading.
Feb 18 09:43:35 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:43:35 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:43:36 np0005623263 python3.9[178197]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:43:36 np0005623263 python3.9[178351]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:43:37 np0005623263 python3.9[178505]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:43:38 np0005623263 python3.9[178659]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:43:38 np0005623263 python3.9[178813]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:43:39 np0005623263 python3.9[178967]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:43:39 np0005623263 python3.9[179121]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:43:40 np0005623263 python3.9[179275]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:43:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:43:41.407 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 09:43:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:43:41.409 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 09:43:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:43:41.409 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 09:43:41 np0005623263 python3.9[179429]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:43:42 np0005623263 python3.9[179582]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:43:42 np0005623263 python3.9[179735]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:43:43 np0005623263 podman[179859]: 2026-02-18 14:43:43.273824863 +0000 UTC m=+0.053125513 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 18 09:43:43 np0005623263 python3.9[179906]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:43:44 np0005623263 python3.9[180061]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:43:44 np0005623263 python3.9[180214]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:43:45 np0005623263 python3.9[180367]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:43:45 np0005623263 python3.9[180520]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:43:46 np0005623263 python3.9[180673]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:43:51 np0005623263 systemd[1]: virtnodedevd.service: Deactivated successfully.
Feb 18 09:43:51 np0005623263 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 18 09:43:52 np0005623263 python3.9[180828]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Feb 18 09:43:53 np0005623263 systemd[1]: virtqemud.service: Deactivated successfully.
Feb 18 09:43:53 np0005623263 python3.9[180982]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 18 09:43:53 np0005623263 python3.9[181142]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 18 09:43:54 np0005623263 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 18 09:43:54 np0005623263 systemd-logind[831]: New session 24 of user zuul.
Feb 18 09:43:54 np0005623263 systemd[1]: Started Session 24 of User zuul.
Feb 18 09:43:55 np0005623263 podman[181178]: 2026-02-18 14:43:55.041531101 +0000 UTC m=+0.117323766 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 18 09:43:55 np0005623263 systemd[1]: session-24.scope: Deactivated successfully.
Feb 18 09:43:55 np0005623263 systemd-logind[831]: Session 24 logged out. Waiting for processes to exit.
Feb 18 09:43:55 np0005623263 systemd-logind[831]: Removed session 24.
Feb 18 09:43:55 np0005623263 python3.9[181355]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:43:55 np0005623263 python3.9[181431]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:43:56 np0005623263 python3.9[181581]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:43:57 np0005623263 python3.9[181702]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771425836.123329-999-35859116617287/.source _original_basename=ssh-config follow=False checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:43:57 np0005623263 python3.9[181852]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:43:58 np0005623263 python3.9[181973]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771425837.2501116-999-242845714129663/.source.py _original_basename=nova_statedir_ownership.py follow=False checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:43:58 np0005623263 python3.9[182123]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:43:59 np0005623263 python3.9[182244]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771425838.208902-999-280657956424458/.source _original_basename=run-on-host follow=False checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:43:59 np0005623263 python3.9[182394]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:44:00 np0005623263 python3.9[182515]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771425839.1881454-1053-111611806558202/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:44:00 np0005623263 python3.9[182668]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:44:01 np0005623263 python3.9[182821]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:44:01 np0005623263 python3.9[182974]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:44:02 np0005623263 python3.9[183127]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:44:02 np0005623263 python3.9[183251]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1771425841.8879998-1092-58390892530476/.source _original_basename=.d2d9nbwn follow=False checksum=8bd651ba0a26ad9fa2d9009d4f15fa7e9546a3da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Feb 18 09:44:03 np0005623263 python3.9[183403]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:44:04 np0005623263 python3.9[183558]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:44:04 np0005623263 python3.9[183711]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:44:05 np0005623263 python3.9[183861]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute_init state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:44:07 np0005623263 python3.9[184285]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute_init config_pattern=*.json debug=False
Feb 18 09:44:08 np0005623263 python3.9[184438]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 18 09:44:09 np0005623263 python3[184595]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute_init config_id=nova_compute_init config_overrides={} config_patterns=*.json containers=['nova_compute_init'] log_base_path=/var/log/containers/stdouts debug=False
Feb 18 09:44:09 np0005623263 podman[184633]: 2026-02-18 14:44:09.864316501 +0000 UTC m=+0.072797841 container create e36d3bca34579b583506431d0c9142fd9b21d30d6b408abf1b5971a8d9454810 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '587c945a7cd9eecda0621b518ac16d593aac16e2e28b5d93cdff8b69b5d209f1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=nova_compute_init, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Feb 18 09:44:09 np0005623263 podman[184633]: 2026-02-18 14:44:09.812127524 +0000 UTC m=+0.020608914 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 18 09:44:09 np0005623263 python3[184595]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --env EDPM_CONFIG_HASH=587c945a7cd9eecda0621b518ac16d593aac16e2e28b5d93cdff8b69b5d209f1 --label config_id=nova_compute_init --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '587c945a7cd9eecda0621b518ac16d593aac16e2e28b5d93cdff8b69b5d209f1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Feb 18 09:44:10 np0005623263 python3.9[184825]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:44:11 np0005623263 python3.9[184977]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 18 09:44:12 np0005623263 python3.9[185130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:44:12 np0005623263 python3.9[185258]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425851.7283092-1218-153617130449032/.source.yaml _original_basename=.mtcerjsu follow=False checksum=0239044fccd321a9e524d1ae44e76c2d4310cfcd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:44:13 np0005623263 python3.9[185411]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:44:13 np0005623263 podman[185412]: 2026-02-18 14:44:13.442759432 +0000 UTC m=+0.052556587 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Feb 18 09:44:13 np0005623263 python3.9[185584]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:44:14 np0005623263 python3.9[185737]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:44:14 np0005623263 python3.9[185861]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/nova_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425854.1155899-1251-99101232613251/.source.json _original_basename=.3ixqvi73 follow=False checksum=0018389a48392615f4a8869cad43008a907328ff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:44:15 np0005623263 python3.9[186011]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:44:17 np0005623263 python3.9[186437]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute config_pattern=*.json debug=False
Feb 18 09:44:18 np0005623263 python3.9[186590]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 18 09:44:19 np0005623263 python3[186743]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute config_id=nova_compute config_overrides={} config_patterns=*.json containers=['nova_compute'] log_base_path=/var/log/containers/stdouts debug=False
Feb 18 09:44:19 np0005623263 podman[186780]: 2026-02-18 14:44:19.33565556 +0000 UTC m=+0.058106154 container create dc944fe5b28e0ade279912f2daa23662b1b0cd2ee34e4b2124339e4fe1ea8839 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-587c945a7cd9eecda0621b518ac16d593aac16e2e28b5d93cdff8b69b5d209f1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=nova_compute, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute)
Feb 18 09:44:19 np0005623263 podman[186780]: 2026-02-18 14:44:19.304259332 +0000 UTC m=+0.026709936 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 18 09:44:19 np0005623263 python3[186743]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-587c945a7cd9eecda0621b518ac16d593aac16e2e28b5d93cdff8b69b5d209f1 --label config_id=nova_compute --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-587c945a7cd9eecda0621b518ac16d593aac16e2e28b5d93cdff8b69b5d209f1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Feb 18 09:44:20 np0005623263 python3.9[186971]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:44:20 np0005623263 python3.9[187126]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:44:21 np0005623263 python3.9[187203]: ansible-stat Invoked with path=/etc/systemd/system/edpm_nova_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:44:21 np0005623263 python3.9[187355]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771425861.128226-1329-177669650055189/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:44:22 np0005623263 python3.9[187432]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 18 09:44:22 np0005623263 systemd[1]: Reloading.
Feb 18 09:44:22 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:44:22 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:44:22 np0005623263 python3.9[187551]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:44:22 np0005623263 systemd[1]: Reloading.
Feb 18 09:44:23 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:44:23 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:44:23 np0005623263 systemd[1]: Starting nova_compute container...
Feb 18 09:44:23 np0005623263 systemd[1]: Started libcrun container.
Feb 18 09:44:23 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d79250798c9415e005bbd31e7dbf7a4faee73069b53f8bbd4e71d27dfdf5a19/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 18 09:44:23 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d79250798c9415e005bbd31e7dbf7a4faee73069b53f8bbd4e71d27dfdf5a19/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 18 09:44:23 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d79250798c9415e005bbd31e7dbf7a4faee73069b53f8bbd4e71d27dfdf5a19/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 18 09:44:23 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d79250798c9415e005bbd31e7dbf7a4faee73069b53f8bbd4e71d27dfdf5a19/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 18 09:44:23 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d79250798c9415e005bbd31e7dbf7a4faee73069b53f8bbd4e71d27dfdf5a19/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 18 09:44:23 np0005623263 podman[187599]: 2026-02-18 14:44:23.399404553 +0000 UTC m=+0.105349150 container init dc944fe5b28e0ade279912f2daa23662b1b0cd2ee34e4b2124339e4fe1ea8839 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-587c945a7cd9eecda0621b518ac16d593aac16e2e28b5d93cdff8b69b5d209f1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute)
Feb 18 09:44:23 np0005623263 podman[187599]: 2026-02-18 14:44:23.407280251 +0000 UTC m=+0.113224848 container start dc944fe5b28e0ade279912f2daa23662b1b0cd2ee34e4b2124339e4fe1ea8839 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, config_id=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-587c945a7cd9eecda0621b518ac16d593aac16e2e28b5d93cdff8b69b5d209f1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Feb 18 09:44:23 np0005623263 podman[187599]: nova_compute
Feb 18 09:44:23 np0005623263 systemd[1]: Started nova_compute container.
Feb 18 09:44:23 np0005623263 nova_compute[187614]: + sudo -E kolla_set_configs
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Validating config file
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Copying service configuration files
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Copying /var/lib/kolla/config_files/src/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Deleting /etc/ceph
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Creating directory /etc/ceph
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Setting permission for /etc/ceph
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Writing out command to execute
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 18 09:44:23 np0005623263 nova_compute[187614]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 18 09:44:23 np0005623263 nova_compute[187614]: ++ cat /run_command
Feb 18 09:44:23 np0005623263 nova_compute[187614]: + CMD=nova-compute
Feb 18 09:44:23 np0005623263 nova_compute[187614]: + ARGS=
Feb 18 09:44:23 np0005623263 nova_compute[187614]: + sudo kolla_copy_cacerts
Feb 18 09:44:23 np0005623263 nova_compute[187614]: + [[ ! -n '' ]]
Feb 18 09:44:23 np0005623263 nova_compute[187614]: + . kolla_extend_start
Feb 18 09:44:23 np0005623263 nova_compute[187614]: Running command: 'nova-compute'
Feb 18 09:44:23 np0005623263 nova_compute[187614]: + echo 'Running command: '\''nova-compute'\'''
Feb 18 09:44:23 np0005623263 nova_compute[187614]: + umask 0022
Feb 18 09:44:23 np0005623263 nova_compute[187614]: + exec nova-compute
Feb 18 09:44:24 np0005623263 python3.9[187775]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 18 09:44:24 np0005623263 python3.9[187929]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:44:25 np0005623263 podman[188026]: 2026-02-18 14:44:25.373040088 +0000 UTC m=+0.098904180 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 18 09:44:25 np0005623263 python3.9[188067]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425864.5237231-1374-266029735564694/.source.yaml _original_basename=.ab210omt follow=False checksum=8bcad11531251290a0143a13c3957df0c0c71643 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:44:25 np0005623263 nova_compute[187614]: 2026-02-18 14:44:25.545 187618 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 18 09:44:25 np0005623263 nova_compute[187614]: 2026-02-18 14:44:25.545 187618 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 18 09:44:25 np0005623263 nova_compute[187614]: 2026-02-18 14:44:25.545 187618 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 18 09:44:25 np0005623263 nova_compute[187614]: 2026-02-18 14:44:25.545 187618 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Feb 18 09:44:25 np0005623263 nova_compute[187614]: 2026-02-18 14:44:25.683 187618 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 09:44:25 np0005623263 nova_compute[187614]: 2026-02-18 14:44:25.694 187618 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 09:44:25 np0005623263 nova_compute[187614]: 2026-02-18 14:44:25.694 187618 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Feb 18 09:44:26 np0005623263 python3.9[188237]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.394 187618 INFO nova.virt.driver [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.510 187618 INFO nova.compute.provider_config [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.524 187618 DEBUG oslo_concurrency.lockutils [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.524 187618 DEBUG oslo_concurrency.lockutils [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.524 187618 DEBUG oslo_concurrency.lockutils [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.525 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.525 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.525 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.525 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.525 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.525 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.525 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.526 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.526 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.526 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.526 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.526 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.527 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.527 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.527 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.527 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.527 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.527 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.528 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.528 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.528 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.528 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.528 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.528 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.528 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.529 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.529 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.529 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.529 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.529 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.529 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.529 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.530 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.530 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.530 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.530 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.530 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.531 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.531 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.531 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.531 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.531 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.532 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.532 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.532 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.532 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.532 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.533 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.533 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.533 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.533 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.533 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.533 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.533 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.534 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.534 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.534 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.534 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.534 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.534 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.535 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.535 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.535 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.535 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.535 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.535 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.535 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.536 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.536 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.536 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.536 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.536 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.536 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.537 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.537 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.537 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.537 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.537 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.537 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.537 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.538 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.538 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.538 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.538 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.538 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.538 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.539 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.539 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.539 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.539 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.539 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.539 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.539 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.540 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.540 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.540 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.540 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.540 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.540 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.540 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.540 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.541 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.541 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.541 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.541 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.541 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.541 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.542 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.542 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.542 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.542 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.542 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.542 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.543 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.543 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.543 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.543 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.543 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.543 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.543 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.544 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.544 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.544 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.544 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.544 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.544 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.544 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.545 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.545 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.545 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.545 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.545 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.545 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.545 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.546 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.546 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.546 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.546 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.546 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.547 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.547 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.547 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.547 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.547 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.547 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.548 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.548 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.548 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.548 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.548 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.548 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.549 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.549 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.549 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.549 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.549 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.549 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.550 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.550 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.550 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.550 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.550 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.550 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.551 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.551 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.551 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.551 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.551 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.552 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.552 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.552 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.552 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.552 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.552 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.553 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.553 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.553 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.553 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.553 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.553 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.554 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.554 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.554 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.554 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.554 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.554 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.555 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.555 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.555 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.555 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.555 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.555 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.556 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.556 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.556 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.556 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.556 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.557 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.557 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.557 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.557 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.557 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.557 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.557 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.558 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.558 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.558 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.558 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.558 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.558 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.558 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.559 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.559 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.559 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.559 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.559 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.559 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.559 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.560 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.560 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.560 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.560 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.560 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.560 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.560 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.561 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.561 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.561 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.561 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.561 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.561 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.561 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.562 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.562 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.562 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.562 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.562 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.562 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.562 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.563 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.563 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.563 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.563 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.563 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.563 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.563 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.564 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.564 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.564 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.564 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.564 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.564 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.565 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.565 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.565 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.565 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.565 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.565 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.566 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.566 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.566 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.566 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.566 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.566 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.567 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.567 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.567 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.567 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.567 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.567 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.567 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.568 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.568 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.568 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.568 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.568 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.568 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.568 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.569 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.569 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.569 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.569 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.569 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.569 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.569 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.570 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.570 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.570 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.570 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.570 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.570 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.571 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.571 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.571 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.571 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.571 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.572 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.572 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.572 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.572 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.573 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.573 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.573 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.573 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.573 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.574 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.574 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.574 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.574 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.574 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.574 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.574 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.575 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.575 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.575 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.575 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.575 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.575 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.575 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.576 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.576 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.576 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.576 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.576 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.576 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.577 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.577 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.577 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.577 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.577 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.578 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.578 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.578 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.578 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.578 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.578 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.578 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.579 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.579 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.579 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.579 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.579 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.579 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.580 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.580 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.580 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.580 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.580 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.581 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.581 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.581 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.581 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.581 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.582 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.582 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.582 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.582 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.582 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.583 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.583 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.583 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.583 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.583 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.583 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.583 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.584 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.584 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.584 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.584 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.584 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.584 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.584 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.585 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.585 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.585 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.585 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.585 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.585 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.585 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.586 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.586 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.586 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.586 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.586 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.587 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.587 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.587 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.587 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.587 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.587 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.588 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.588 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.588 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.588 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.588 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.588 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.589 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.589 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.589 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.589 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.589 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.589 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.590 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.590 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.590 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.590 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.590 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.590 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.591 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.591 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.591 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.591 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.591 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.591 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.591 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.592 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.592 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.592 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.592 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.592 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.592 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.592 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.593 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.593 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.593 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.593 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.593 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.594 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.594 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.594 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.594 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.594 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.594 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.594 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.595 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.595 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.595 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.595 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.595 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.595 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.595 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.596 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.596 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.596 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.596 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.596 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.596 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.597 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.597 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.597 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.597 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.597 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.597 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.598 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.598 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.598 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.598 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.598 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.598 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.599 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.599 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.599 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.599 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.599 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.599 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.599 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.600 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.600 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.600 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.600 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.600 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.601 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.601 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.601 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.601 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.601 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.601 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.602 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.602 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.602 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.602 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.602 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.602 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.603 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.603 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.603 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.603 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.603 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.603 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.603 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.604 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.604 187618 WARNING oslo_config.cfg [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 18 09:44:26 np0005623263 nova_compute[187614]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 18 09:44:26 np0005623263 nova_compute[187614]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 18 09:44:26 np0005623263 nova_compute[187614]: and ``live_migration_inbound_addr`` respectively.
Feb 18 09:44:26 np0005623263 nova_compute[187614]: ).  Its value may be silently ignored in the future.#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.604 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.604 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.604 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.605 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.605 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.605 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.605 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.605 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.606 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.606 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.606 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.606 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.607 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.607 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.607 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.607 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.607 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.608 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.608 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.608 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.608 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.608 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.608 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.608 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.609 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.609 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.609 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.609 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.609 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.609 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.609 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.610 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.610 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.610 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.610 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.610 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.610 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.611 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.611 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.611 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.611 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.611 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.611 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.611 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.612 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.612 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.612 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.612 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.612 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.612 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.613 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.613 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.613 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.613 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.613 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.614 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.614 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.614 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.614 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.614 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.614 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.614 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.615 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.615 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.615 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.615 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.615 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.615 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.616 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.616 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.616 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.616 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.616 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.617 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.617 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.617 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.617 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.617 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.617 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.618 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.618 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.618 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.618 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.619 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.619 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.619 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.619 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.619 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.620 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.620 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.620 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.620 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.620 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.620 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.620 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.621 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.621 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.621 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.621 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.621 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.621 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.621 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.622 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.622 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.622 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.622 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.622 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.622 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.622 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.623 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.623 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.623 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.623 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.623 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.623 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.624 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.624 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.624 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.624 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.624 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.624 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.624 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.625 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.625 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.625 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.625 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.625 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.625 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.626 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.626 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.626 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.626 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.626 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.626 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.627 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.627 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.627 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.627 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.627 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.627 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.628 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.628 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.628 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.628 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.628 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.628 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.629 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.629 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.629 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.629 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.629 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.629 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.629 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.630 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.630 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.630 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.630 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.630 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.630 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.631 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.631 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.631 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.631 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.631 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.631 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.631 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.632 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.632 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.632 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.632 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.632 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.632 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.632 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.633 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.633 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.633 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.633 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.633 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.633 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.634 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.634 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.634 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.634 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.634 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.635 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.635 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.635 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.635 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.635 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.635 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.636 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.636 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.636 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.636 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.636 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.636 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.636 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.637 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.637 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.637 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.637 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.637 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.637 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.637 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.638 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.638 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.638 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.638 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.638 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.638 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.639 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.639 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.639 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.639 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.639 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.639 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.639 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.640 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.640 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.640 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.640 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.640 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.640 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.640 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.641 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.641 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.641 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.641 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.641 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.641 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.642 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.642 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.642 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.642 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.642 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.642 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.643 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.643 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.643 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.643 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.643 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.643 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.643 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.644 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.644 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.644 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.644 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.644 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.645 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.645 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.645 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.645 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.646 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.646 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.646 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.646 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.646 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.646 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.647 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.647 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.647 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.647 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.647 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.647 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.648 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.648 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.648 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.648 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.648 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.649 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.649 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.649 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.649 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.650 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.650 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.650 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.650 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.650 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.650 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.651 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.651 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.651 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.651 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.651 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.651 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.651 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.652 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.652 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.652 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.652 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.652 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.652 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.653 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.653 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.653 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.653 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.653 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.654 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.654 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.654 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.654 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.654 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.654 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.655 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.655 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.655 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.655 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.655 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.655 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.656 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.656 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.656 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.656 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.656 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.656 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.657 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.657 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.657 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.657 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.657 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.657 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.658 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.658 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.658 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.658 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.658 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.658 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.658 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.659 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.659 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.659 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.659 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.659 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.659 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.660 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.660 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.660 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.660 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.660 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.661 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.661 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.661 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.661 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.661 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.661 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.662 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.662 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.663 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.663 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.663 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.663 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.663 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.663 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.663 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.664 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.664 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.664 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.664 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.664 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.664 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.664 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.665 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.665 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.665 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.665 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.665 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.665 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.666 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.666 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.666 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.666 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.666 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.666 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.667 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.667 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.667 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.667 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.667 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.667 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.667 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.668 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.668 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.668 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.668 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.668 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.669 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.669 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.669 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.669 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.669 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.669 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.669 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.670 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.670 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.670 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.670 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.670 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.670 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.671 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.671 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.671 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.671 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.671 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.671 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.671 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.672 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.672 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.672 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.672 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.672 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.672 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.673 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.673 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.673 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.673 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.673 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.673 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.674 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.674 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.674 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.674 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.674 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.674 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.674 187618 DEBUG oslo_service.service [None req-080d477e-995f-4c5b-a47f-90cb20f943af - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.676 187618 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.691 187618 DEBUG nova.virt.libvirt.host [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.692 187618 DEBUG nova.virt.libvirt.host [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.692 187618 DEBUG nova.virt.libvirt.host [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.692 187618 DEBUG nova.virt.libvirt.host [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 18 09:44:26 np0005623263 systemd[1]: Starting libvirt QEMU daemon...
Feb 18 09:44:26 np0005623263 systemd[1]: Started libvirt QEMU daemon.
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.755 187618 DEBUG nova.virt.libvirt.host [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fa2a4da1be0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.757 187618 DEBUG nova.virt.libvirt.host [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fa2a4da1be0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.758 187618 INFO nova.virt.libvirt.driver [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Connection event '1' reason 'None'
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.779 187618 WARNING nova.virt.libvirt.driver [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Feb 18 09:44:26 np0005623263 nova_compute[187614]: 2026-02-18 14:44:26.780 187618 DEBUG nova.virt.libvirt.volume.mount [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 18 09:44:26 np0005623263 python3.9[188438]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 2026-02-18 14:44:27.576 187618 INFO nova.virt.libvirt.host [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Libvirt host capabilities <capabilities>
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <host>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <uuid>022190eb-356a-449e-bedd-18333ca89982</uuid>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <cpu>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <arch>x86_64</arch>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model>EPYC-Rome-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <vendor>AMD</vendor>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <microcode version='16777317'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <signature family='23' model='49' stepping='0'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <maxphysaddr mode='emulate' bits='40'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='x2apic'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='tsc-deadline'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='osxsave'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='hypervisor'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='tsc_adjust'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='spec-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='stibp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='arch-capabilities'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='ssbd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='cmp_legacy'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='topoext'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='virt-ssbd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='lbrv'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='tsc-scale'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='vmcb-clean'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='pause-filter'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='pfthreshold'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='svme-addr-chk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='rdctl-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='skip-l1dfl-vmentry'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='mds-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature name='pschange-mc-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <pages unit='KiB' size='4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <pages unit='KiB' size='2048'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <pages unit='KiB' size='1048576'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </cpu>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <power_management>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <suspend_mem/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <suspend_disk/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <suspend_hybrid/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </power_management>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <iommu support='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <migration_features>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <live/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <uri_transports>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <uri_transport>tcp</uri_transport>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <uri_transport>rdma</uri_transport>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </uri_transports>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </migration_features>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <topology>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <cells num='1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <cell id='0'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:          <memory unit='KiB'>7864284</memory>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:          <pages unit='KiB' size='4'>1966071</pages>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:          <pages unit='KiB' size='2048'>0</pages>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:          <pages unit='KiB' size='1048576'>0</pages>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:          <distances>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:            <sibling id='0' value='10'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:          </distances>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:          <cpus num='8'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:          </cpus>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        </cell>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </cells>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </topology>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <cache>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </cache>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <secmodel>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model>selinux</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <doi>0</doi>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </secmodel>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <secmodel>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model>dac</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <doi>0</doi>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <baselabel type='kvm'>+107:+107</baselabel>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <baselabel type='qemu'>+107:+107</baselabel>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </secmodel>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </host>
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <guest>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <os_type>hvm</os_type>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <arch name='i686'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <wordsize>32</wordsize>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <domain type='qemu'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <domain type='kvm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </arch>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <features>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <pae/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <nonpae/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <acpi default='on' toggle='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <apic default='on' toggle='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <cpuselection/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <deviceboot/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <disksnapshot default='on' toggle='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <externalSnapshot/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </features>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </guest>
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <guest>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <os_type>hvm</os_type>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <arch name='x86_64'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <wordsize>64</wordsize>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <domain type='qemu'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <domain type='kvm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </arch>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <features>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <acpi default='on' toggle='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <apic default='on' toggle='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <cpuselection/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <deviceboot/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <disksnapshot default='on' toggle='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <externalSnapshot/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </features>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </guest>
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 
Feb 18 09:44:27 np0005623263 nova_compute[187614]: </capabilities>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 2026-02-18 14:44:27.587 187618 DEBUG nova.virt.libvirt.host [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 2026-02-18 14:44:27.608 187618 DEBUG nova.virt.libvirt.host [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 18 09:44:27 np0005623263 nova_compute[187614]: <domainCapabilities>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <path>/usr/libexec/qemu-kvm</path>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <domain>kvm</domain>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <machine>pc-q35-rhel9.8.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <arch>i686</arch>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <vcpu max='4096'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <iothreads supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <os supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <enum name='firmware'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <loader supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>rom</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pflash</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='readonly'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>yes</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>no</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='secure'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>no</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </loader>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </os>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <cpu>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <mode name='host-passthrough' supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='hostPassthroughMigratable'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>on</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>off</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </mode>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <mode name='maximum' supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='maximumMigratable'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>on</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>off</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </mode>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <mode name='host-model' supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <vendor>AMD</vendor>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='x2apic'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='tsc-deadline'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='hypervisor'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='tsc_adjust'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='spec-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='stibp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='ssbd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='cmp_legacy'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='overflow-recov'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='succor'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='amd-ssbd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='virt-ssbd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='lbrv'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='tsc-scale'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='vmcb-clean'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='flushbyasid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='pause-filter'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='pfthreshold'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='svme-addr-chk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='disable' name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </mode>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <mode name='custom' supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-noTSX'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='ClearwaterForest'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ddpd-u'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='intel-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='lam'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sha512'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sm3'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sm4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='ClearwaterForest-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ddpd-u'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='intel-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='lam'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sha512'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sm3'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sm4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cooperlake'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cooperlake-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cooperlake-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Denverton'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mpx'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Denverton-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mpx'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Denverton-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Denverton-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Dhyana-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Genoa'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Genoa-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Genoa-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='perfmon-v2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Milan'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Milan-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Milan-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Milan-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Rome'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Rome-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Rome-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Rome-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Turin'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vp2intersect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibpb-brtype'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='perfmon-v2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbpb'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='srso-user-kernel-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Turin-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vp2intersect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibpb-brtype'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='perfmon-v2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbpb'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='srso-user-kernel-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-v5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='GraniteRapids'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='GraniteRapids-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='GraniteRapids-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-128'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-256'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-512'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='GraniteRapids-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-128'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-256'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-512'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 python3.9[188597]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-noTSX'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-noTSX'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v6'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v7'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='IvyBridge'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='IvyBridge-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='IvyBridge-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='IvyBridge-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='KnightsMill'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-4fmaps'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-4vnniw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512er'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512pf'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='KnightsMill-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-4fmaps'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-4vnniw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512er'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512pf'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Opteron_G4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fma4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xop'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Opteron_G4-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fma4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xop'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Opteron_G5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fma4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tbm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xop'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Opteron_G5-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fma4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tbm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xop'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SierraForest'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SierraForest-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SierraForest-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='intel-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='lam'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SierraForest-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='intel-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='lam'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='core-capability'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mpx'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='split-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='core-capability'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mpx'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='split-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='core-capability'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='split-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='core-capability'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='split-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='athlon'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnow'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnowext'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='athlon-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnow'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnowext'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='core2duo'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='core2duo-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='coreduo'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='coreduo-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='n270'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='n270-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='phenom'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnow'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnowext'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='phenom-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnow'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnowext'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </mode>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </cpu>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <memoryBacking supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <enum name='sourceType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>file</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>anonymous</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>memfd</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </memoryBacking>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <devices>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <disk supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='diskDevice'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>disk</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>cdrom</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>floppy</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>lun</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='bus'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>fdc</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>scsi</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>usb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>sata</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio-transitional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio-non-transitional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </disk>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <graphics supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vnc</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>egl-headless</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>dbus</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </graphics>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <video supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='modelType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vga</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>cirrus</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>none</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>bochs</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>ramfb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </video>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <hostdev supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='mode'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>subsystem</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='startupPolicy'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>default</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>mandatory</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>requisite</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>optional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='subsysType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>usb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pci</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>scsi</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='capsType'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='pciBackend'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </hostdev>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <rng supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio-transitional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio-non-transitional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendModel'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>random</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>egd</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>builtin</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </rng>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <filesystem supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='driverType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>path</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>handle</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtiofs</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </filesystem>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <tpm supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>tpm-tis</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>tpm-crb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendModel'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>emulator</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>external</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendVersion'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>2.0</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </tpm>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <redirdev supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='bus'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>usb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </redirdev>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <channel supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pty</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>unix</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </channel>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <crypto supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>qemu</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendModel'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>builtin</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </crypto>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <interface supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>default</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>passt</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </interface>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <panic supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>isa</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>hyperv</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </panic>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <console supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>null</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vc</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pty</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>dev</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>file</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pipe</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>stdio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>udp</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>tcp</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>unix</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>qemu-vdagent</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>dbus</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </console>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </devices>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <features>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <gic supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <vmcoreinfo supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <genid supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <backingStoreInput supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <backup supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <async-teardown supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <s390-pv supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <ps2 supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <tdx supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <sev supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <sgx supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <hyperv supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='features'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>relaxed</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vapic</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>spinlocks</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vpindex</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>runtime</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>synic</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>stimer</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>reset</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vendor_id</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>frequencies</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>reenlightenment</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>tlbflush</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>ipi</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>avic</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>emsr_bitmap</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>xmm_input</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <defaults>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <spinlocks>4095</spinlocks>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <stimer_direct>on</stimer_direct>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <tlbflush_direct>on</tlbflush_direct>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <tlbflush_extended>on</tlbflush_extended>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </defaults>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </hyperv>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <launchSecurity supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </features>
Feb 18 09:44:27 np0005623263 nova_compute[187614]: </domainCapabilities>
Feb 18 09:44:27 np0005623263 nova_compute[187614]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 2026-02-18 14:44:27.615 187618 DEBUG nova.virt.libvirt.host [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 18 09:44:27 np0005623263 nova_compute[187614]: <domainCapabilities>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <path>/usr/libexec/qemu-kvm</path>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <domain>kvm</domain>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <arch>i686</arch>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <vcpu max='240'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <iothreads supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <os supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <enum name='firmware'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <loader supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>rom</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pflash</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='readonly'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>yes</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>no</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='secure'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>no</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </loader>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </os>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <cpu>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <mode name='host-passthrough' supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='hostPassthroughMigratable'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>on</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>off</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </mode>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <mode name='maximum' supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='maximumMigratable'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>on</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>off</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </mode>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <mode name='host-model' supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <vendor>AMD</vendor>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='x2apic'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='tsc-deadline'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='hypervisor'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='tsc_adjust'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='spec-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='stibp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='ssbd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='cmp_legacy'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='overflow-recov'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='succor'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='amd-ssbd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='virt-ssbd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='lbrv'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='tsc-scale'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='vmcb-clean'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='flushbyasid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='pause-filter'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='pfthreshold'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='svme-addr-chk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='disable' name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </mode>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <mode name='custom' supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-noTSX'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='ClearwaterForest'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ddpd-u'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='intel-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='lam'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sha512'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sm3'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sm4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='ClearwaterForest-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ddpd-u'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='intel-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='lam'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sha512'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sm3'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sm4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cooperlake'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cooperlake-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cooperlake-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Denverton'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mpx'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Denverton-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mpx'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Denverton-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Denverton-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Dhyana-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Genoa'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Genoa-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Genoa-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='perfmon-v2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Milan'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Milan-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Milan-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Milan-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Rome'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Rome-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Rome-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Rome-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Turin'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vp2intersect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibpb-brtype'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='perfmon-v2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbpb'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='srso-user-kernel-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Turin-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vp2intersect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibpb-brtype'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='perfmon-v2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbpb'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='srso-user-kernel-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-v5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='GraniteRapids'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='GraniteRapids-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='GraniteRapids-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-128'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-256'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-512'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='GraniteRapids-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-128'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-256'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-512'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-noTSX'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-noTSX'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v6'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v7'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='IvyBridge'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='IvyBridge-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='IvyBridge-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='IvyBridge-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='KnightsMill'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-4fmaps'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-4vnniw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512er'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512pf'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='KnightsMill-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-4fmaps'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-4vnniw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512er'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512pf'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Opteron_G4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fma4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xop'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Opteron_G4-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fma4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xop'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Opteron_G5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fma4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tbm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xop'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Opteron_G5-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fma4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tbm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xop'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SierraForest'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SierraForest-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SierraForest-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='intel-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='lam'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SierraForest-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='intel-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='lam'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='core-capability'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mpx'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='split-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='core-capability'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mpx'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='split-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='core-capability'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='split-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='core-capability'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='split-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='athlon'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnow'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnowext'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='athlon-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnow'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnowext'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='core2duo'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='core2duo-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='coreduo'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='coreduo-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='n270'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='n270-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='phenom'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnow'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnowext'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='phenom-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnow'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnowext'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </mode>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </cpu>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <memoryBacking supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <enum name='sourceType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>file</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>anonymous</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>memfd</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </memoryBacking>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <devices>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <disk supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='diskDevice'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>disk</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>cdrom</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>floppy</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>lun</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='bus'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>ide</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>fdc</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>scsi</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>usb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>sata</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio-transitional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio-non-transitional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </disk>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <graphics supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vnc</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>egl-headless</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>dbus</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </graphics>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <video supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='modelType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vga</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>cirrus</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>none</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>bochs</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>ramfb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </video>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <hostdev supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='mode'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>subsystem</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='startupPolicy'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>default</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>mandatory</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>requisite</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>optional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='subsysType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>usb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pci</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>scsi</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='capsType'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='pciBackend'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </hostdev>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <rng supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio-transitional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio-non-transitional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendModel'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>random</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>egd</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>builtin</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </rng>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <filesystem supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='driverType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>path</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>handle</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtiofs</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </filesystem>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <tpm supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>tpm-tis</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>tpm-crb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendModel'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>emulator</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>external</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendVersion'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>2.0</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </tpm>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <redirdev supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='bus'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>usb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </redirdev>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <channel supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pty</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>unix</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </channel>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <crypto supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>qemu</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendModel'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>builtin</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </crypto>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <interface supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>default</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>passt</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </interface>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <panic supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>isa</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>hyperv</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </panic>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <console supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>null</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vc</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pty</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>dev</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>file</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pipe</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>stdio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>udp</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>tcp</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>unix</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>qemu-vdagent</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>dbus</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </console>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </devices>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <features>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <gic supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <vmcoreinfo supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <genid supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <backingStoreInput supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <backup supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <async-teardown supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <s390-pv supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <ps2 supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <tdx supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <sev supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <sgx supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <hyperv supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='features'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>relaxed</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vapic</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>spinlocks</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vpindex</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>runtime</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>synic</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>stimer</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>reset</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vendor_id</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>frequencies</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>reenlightenment</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>tlbflush</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>ipi</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>avic</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>emsr_bitmap</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>xmm_input</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <defaults>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <spinlocks>4095</spinlocks>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <stimer_direct>on</stimer_direct>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <tlbflush_direct>on</tlbflush_direct>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <tlbflush_extended>on</tlbflush_extended>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </defaults>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </hyperv>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <launchSecurity supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </features>
Feb 18 09:44:27 np0005623263 nova_compute[187614]: </domainCapabilities>
Feb 18 09:44:27 np0005623263 nova_compute[187614]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 2026-02-18 14:44:27.677 187618 DEBUG nova.virt.libvirt.host [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 2026-02-18 14:44:27.682 187618 DEBUG nova.virt.libvirt.host [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 18 09:44:27 np0005623263 nova_compute[187614]: <domainCapabilities>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <path>/usr/libexec/qemu-kvm</path>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <domain>kvm</domain>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <machine>pc-q35-rhel9.8.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <arch>x86_64</arch>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <vcpu max='4096'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <iothreads supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <os supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <enum name='firmware'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>efi</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <loader supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>rom</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pflash</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='readonly'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>yes</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>no</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='secure'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>yes</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>no</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </loader>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </os>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <cpu>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <mode name='host-passthrough' supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='hostPassthroughMigratable'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>on</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>off</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </mode>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <mode name='maximum' supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='maximumMigratable'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>on</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>off</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </mode>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <mode name='host-model' supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <vendor>AMD</vendor>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='x2apic'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='tsc-deadline'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='hypervisor'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='tsc_adjust'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='spec-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='stibp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='ssbd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='cmp_legacy'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='overflow-recov'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='succor'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='amd-ssbd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='virt-ssbd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='lbrv'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='tsc-scale'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='vmcb-clean'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='flushbyasid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='pause-filter'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='pfthreshold'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='svme-addr-chk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='disable' name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </mode>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <mode name='custom' supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-noTSX'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='ClearwaterForest'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ddpd-u'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='intel-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='lam'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sha512'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sm3'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sm4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='ClearwaterForest-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ddpd-u'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='intel-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='lam'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sha512'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sm3'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sm4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cooperlake'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cooperlake-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cooperlake-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Denverton'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mpx'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Denverton-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mpx'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Denverton-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Denverton-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Dhyana-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Genoa'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Genoa-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Genoa-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='perfmon-v2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Milan'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Milan-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Milan-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Milan-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Rome'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Rome-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Rome-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Rome-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Turin'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vp2intersect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibpb-brtype'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='perfmon-v2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbpb'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='srso-user-kernel-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Turin-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vp2intersect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibpb-brtype'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='perfmon-v2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbpb'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='srso-user-kernel-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-v5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='GraniteRapids'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='GraniteRapids-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='GraniteRapids-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-128'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-256'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-512'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='GraniteRapids-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-128'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-256'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-512'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-noTSX'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-noTSX'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v6'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v7'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='IvyBridge'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='IvyBridge-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='IvyBridge-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='IvyBridge-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='KnightsMill'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-4fmaps'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-4vnniw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512er'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512pf'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='KnightsMill-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-4fmaps'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-4vnniw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512er'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512pf'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Opteron_G4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fma4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xop'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Opteron_G4-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fma4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xop'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Opteron_G5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fma4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tbm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xop'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Opteron_G5-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fma4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tbm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xop'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SierraForest'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SierraForest-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SierraForest-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='intel-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='lam'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SierraForest-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='intel-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='lam'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='core-capability'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mpx'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='split-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='core-capability'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mpx'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='split-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='core-capability'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='split-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='core-capability'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='split-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='athlon'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnow'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnowext'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='athlon-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnow'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnowext'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='core2duo'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='core2duo-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='coreduo'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='coreduo-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='n270'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='n270-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='phenom'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnow'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnowext'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='phenom-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnow'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnowext'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </mode>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </cpu>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <memoryBacking supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <enum name='sourceType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>file</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>anonymous</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>memfd</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </memoryBacking>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <devices>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <disk supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='diskDevice'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>disk</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>cdrom</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>floppy</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>lun</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='bus'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>fdc</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>scsi</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>usb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>sata</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio-transitional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio-non-transitional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </disk>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <graphics supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vnc</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>egl-headless</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>dbus</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </graphics>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <video supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='modelType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vga</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>cirrus</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>none</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>bochs</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>ramfb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </video>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <hostdev supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='mode'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>subsystem</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='startupPolicy'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>default</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>mandatory</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>requisite</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>optional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='subsysType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>usb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pci</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>scsi</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='capsType'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='pciBackend'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </hostdev>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <rng supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio-transitional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio-non-transitional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendModel'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>random</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>egd</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>builtin</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </rng>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <filesystem supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='driverType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>path</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>handle</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtiofs</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </filesystem>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <tpm supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>tpm-tis</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>tpm-crb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendModel'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>emulator</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>external</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendVersion'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>2.0</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </tpm>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <redirdev supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='bus'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>usb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </redirdev>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <channel supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pty</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>unix</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </channel>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <crypto supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>qemu</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendModel'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>builtin</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </crypto>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <interface supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>default</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>passt</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </interface>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <panic supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>isa</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>hyperv</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </panic>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <console supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>null</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vc</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pty</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>dev</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>file</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pipe</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>stdio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>udp</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>tcp</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>unix</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>qemu-vdagent</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>dbus</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </console>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </devices>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <features>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <gic supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <vmcoreinfo supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <genid supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <backingStoreInput supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <backup supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <async-teardown supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <s390-pv supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <ps2 supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <tdx supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <sev supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <sgx supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <hyperv supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='features'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>relaxed</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vapic</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>spinlocks</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vpindex</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>runtime</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>synic</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>stimer</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>reset</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vendor_id</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>frequencies</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>reenlightenment</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>tlbflush</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>ipi</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>avic</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>emsr_bitmap</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>xmm_input</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <defaults>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <spinlocks>4095</spinlocks>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <stimer_direct>on</stimer_direct>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <tlbflush_direct>on</tlbflush_direct>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <tlbflush_extended>on</tlbflush_extended>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </defaults>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </hyperv>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <launchSecurity supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </features>
Feb 18 09:44:27 np0005623263 nova_compute[187614]: </domainCapabilities>
Feb 18 09:44:27 np0005623263 nova_compute[187614]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 2026-02-18 14:44:27.789 187618 DEBUG nova.virt.libvirt.host [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 18 09:44:27 np0005623263 nova_compute[187614]: <domainCapabilities>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <path>/usr/libexec/qemu-kvm</path>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <domain>kvm</domain>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <arch>x86_64</arch>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <vcpu max='240'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <iothreads supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <os supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <enum name='firmware'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <loader supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>rom</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pflash</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='readonly'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>yes</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>no</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='secure'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>no</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </loader>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </os>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <cpu>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <mode name='host-passthrough' supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='hostPassthroughMigratable'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>on</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>off</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </mode>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <mode name='maximum' supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='maximumMigratable'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>on</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>off</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </mode>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <mode name='host-model' supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <vendor>AMD</vendor>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='x2apic'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='tsc-deadline'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='hypervisor'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='tsc_adjust'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='spec-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='stibp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='ssbd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='cmp_legacy'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='overflow-recov'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='succor'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='amd-ssbd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='virt-ssbd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='lbrv'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='tsc-scale'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='vmcb-clean'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='flushbyasid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='pause-filter'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='pfthreshold'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='svme-addr-chk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <feature policy='disable' name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </mode>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <mode name='custom' supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-noTSX'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Broadwell-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cascadelake-Server-v5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='ClearwaterForest'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ddpd-u'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='intel-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='lam'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sha512'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sm3'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sm4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='ClearwaterForest-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ddpd-u'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='intel-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='lam'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sha512'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sm3'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sm4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cooperlake'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cooperlake-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Cooperlake-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Denverton'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mpx'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Denverton-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mpx'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Denverton-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Denverton-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Dhyana-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Genoa'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Genoa-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Genoa-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='perfmon-v2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Milan'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Milan-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Milan-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Milan-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Rome'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Rome-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Rome-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Rome-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Turin'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vp2intersect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibpb-brtype'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='perfmon-v2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbpb'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='srso-user-kernel-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-Turin-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amd-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='auto-ibrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vp2intersect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibpb-brtype'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='perfmon-v2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbpb'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='srso-user-kernel-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='stibp-always-on'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='EPYC-v5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='GraniteRapids'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='GraniteRapids-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='GraniteRapids-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-128'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-256'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-512'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='GraniteRapids-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-128'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-256'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx10-512'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='prefetchiti'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-noTSX'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Haswell-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-noTSX'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v6'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Icelake-Server-v7'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='IvyBridge'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='IvyBridge-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='IvyBridge-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='IvyBridge-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='KnightsMill'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-4fmaps'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-4vnniw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512er'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512pf'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='KnightsMill-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-4fmaps'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-4vnniw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512er'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512pf'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Opteron_G4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fma4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xop'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Opteron_G4-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fma4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xop'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Opteron_G5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fma4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tbm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xop'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Opteron_G5-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fma4'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tbm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xop'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SapphireRapids-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='amx-tile'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-bf16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-fp16'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bitalg'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrc'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fzrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='la57'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='taa-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SierraForest'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SierraForest-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SierraForest-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='intel-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='lam'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='SierraForest-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ifma'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cmpccxadd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fbsdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='fsrs'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ibrs-all'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='intel-psfd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='lam'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mcdt-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pbrsb-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='psdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='serialize'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vaes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Client-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='hle'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='rtm'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Skylake-Server-v5'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512bw'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512cd'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512dq'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512f'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='avx512vl'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='invpcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pcid'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='pku'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='core-capability'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mpx'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='split-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='core-capability'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='mpx'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='split-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge-v2'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='core-capability'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='split-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge-v3'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='core-capability'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='split-lock-detect'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='Snowridge-v4'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='cldemote'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='erms'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='gfni'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdir64b'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='movdiri'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='xsaves'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='athlon'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnow'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnowext'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='athlon-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnow'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnowext'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='core2duo'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='core2duo-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='coreduo'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='coreduo-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='n270'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='n270-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='ss'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='phenom'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnow'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnowext'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <blockers model='phenom-v1'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnow'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <feature name='3dnowext'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </blockers>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </mode>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </cpu>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <memoryBacking supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <enum name='sourceType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>file</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>anonymous</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <value>memfd</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </memoryBacking>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <devices>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <disk supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='diskDevice'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>disk</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>cdrom</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>floppy</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>lun</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='bus'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>ide</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>fdc</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>scsi</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>usb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>sata</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio-transitional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio-non-transitional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </disk>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <graphics supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vnc</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>egl-headless</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>dbus</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </graphics>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <video supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='modelType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vga</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>cirrus</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>none</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>bochs</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>ramfb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </video>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <hostdev supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='mode'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>subsystem</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='startupPolicy'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>default</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>mandatory</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>requisite</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>optional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='subsysType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>usb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pci</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>scsi</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='capsType'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='pciBackend'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </hostdev>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <rng supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio-transitional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtio-non-transitional</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendModel'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>random</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>egd</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>builtin</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </rng>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <filesystem supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='driverType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>path</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>handle</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>virtiofs</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </filesystem>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <tpm supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>tpm-tis</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>tpm-crb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendModel'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>emulator</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>external</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendVersion'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>2.0</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </tpm>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <redirdev supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='bus'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>usb</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </redirdev>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <channel supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pty</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>unix</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </channel>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <crypto supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>qemu</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendModel'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>builtin</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </crypto>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <interface supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='backendType'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>default</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>passt</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </interface>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <panic supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='model'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>isa</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>hyperv</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </panic>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <console supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='type'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>null</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vc</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pty</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>dev</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>file</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>pipe</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>stdio</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>udp</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>tcp</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>unix</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>qemu-vdagent</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>dbus</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </console>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </devices>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  <features>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <gic supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <vmcoreinfo supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <genid supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <backingStoreInput supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <backup supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <async-teardown supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <s390-pv supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <ps2 supported='yes'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <tdx supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <sev supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <sgx supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <hyperv supported='yes'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <enum name='features'>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>relaxed</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vapic</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>spinlocks</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vpindex</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>runtime</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>synic</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>stimer</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>reset</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>vendor_id</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>frequencies</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>reenlightenment</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>tlbflush</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>ipi</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>avic</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>emsr_bitmap</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <value>xmm_input</value>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </enum>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      <defaults>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <spinlocks>4095</spinlocks>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <stimer_direct>on</stimer_direct>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <tlbflush_direct>on</tlbflush_direct>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <tlbflush_extended>on</tlbflush_extended>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:      </defaults>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    </hyperv>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:    <launchSecurity supported='no'/>
Feb 18 09:44:27 np0005623263 nova_compute[187614]:  </features>
Feb 18 09:44:27 np0005623263 nova_compute[187614]: </domainCapabilities>
Feb 18 09:44:27 np0005623263 nova_compute[187614]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 2026-02-18 14:44:27.882 187618 DEBUG nova.virt.libvirt.host [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 2026-02-18 14:44:27.882 187618 INFO nova.virt.libvirt.host [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Secure Boot support detected#033[00m
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 2026-02-18 14:44:27.885 187618 INFO nova.virt.libvirt.driver [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 2026-02-18 14:44:27.885 187618 INFO nova.virt.libvirt.driver [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 2026-02-18 14:44:27.894 187618 DEBUG nova.virt.libvirt.driver [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 2026-02-18 14:44:27.941 187618 INFO nova.virt.node [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Determined node identity 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 from /var/lib/nova/compute_id#033[00m
Feb 18 09:44:27 np0005623263 nova_compute[187614]: 2026-02-18 14:44:27.969 187618 WARNING nova.compute.manager [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Compute nodes ['7d5f91f3-cf81-4de6-86b4-ce92bbe09380'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Feb 18 09:44:28 np0005623263 nova_compute[187614]: 2026-02-18 14:44:28.013 187618 INFO nova.compute.manager [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Feb 18 09:44:28 np0005623263 nova_compute[187614]: 2026-02-18 14:44:28.061 187618 WARNING nova.compute.manager [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Feb 18 09:44:28 np0005623263 nova_compute[187614]: 2026-02-18 14:44:28.061 187618 DEBUG oslo_concurrency.lockutils [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 09:44:28 np0005623263 nova_compute[187614]: 2026-02-18 14:44:28.062 187618 DEBUG oslo_concurrency.lockutils [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 09:44:28 np0005623263 nova_compute[187614]: 2026-02-18 14:44:28.062 187618 DEBUG oslo_concurrency.lockutils [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 09:44:28 np0005623263 nova_compute[187614]: 2026-02-18 14:44:28.062 187618 DEBUG nova.compute.resource_tracker [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 09:44:28 np0005623263 systemd[1]: Starting libvirt nodedev daemon...
Feb 18 09:44:28 np0005623263 systemd[1]: Started libvirt nodedev daemon.
Feb 18 09:44:28 np0005623263 nova_compute[187614]: 2026-02-18 14:44:28.325 187618 WARNING nova.virt.libvirt.driver [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 09:44:28 np0005623263 nova_compute[187614]: 2026-02-18 14:44:28.327 187618 DEBUG nova.compute.resource_tracker [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6008MB free_disk=72.49908447265625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 09:44:28 np0005623263 nova_compute[187614]: 2026-02-18 14:44:28.327 187618 DEBUG oslo_concurrency.lockutils [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 09:44:28 np0005623263 nova_compute[187614]: 2026-02-18 14:44:28.327 187618 DEBUG oslo_concurrency.lockutils [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 09:44:28 np0005623263 nova_compute[187614]: 2026-02-18 14:44:28.348 187618 WARNING nova.compute.resource_tracker [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] No compute node record for compute-0.ctlplane.example.com:7d5f91f3-cf81-4de6-86b4-ce92bbe09380: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 could not be found.#033[00m
Feb 18 09:44:28 np0005623263 nova_compute[187614]: 2026-02-18 14:44:28.377 187618 INFO nova.compute.resource_tracker [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380#033[00m
Feb 18 09:44:28 np0005623263 nova_compute[187614]: 2026-02-18 14:44:28.453 187618 DEBUG nova.compute.resource_tracker [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 09:44:28 np0005623263 nova_compute[187614]: 2026-02-18 14:44:28.454 187618 DEBUG nova.compute.resource_tracker [None req-61d20043-0268-405d-b9a1-30ed267282c0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 09:44:28 np0005623263 python3.9[188777]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 18 09:44:28 np0005623263 rsyslogd[1015]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 18 09:44:28 np0005623263 rsyslogd[1015]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 18 09:44:29 np0005623263 python3.9[188954]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:44:29 np0005623263 systemd[1]: Stopping nova_compute container...
Feb 18 09:44:29 np0005623263 virtqemud[188343]: libvirt version: 11.10.0, package: 4.el9 (builder@centos.org, 2026-01-29-15:25:17, )
Feb 18 09:44:29 np0005623263 virtqemud[188343]: hostname: compute-0
Feb 18 09:44:29 np0005623263 virtqemud[188343]: End of file while reading data: Input/output error
Feb 18 09:44:29 np0005623263 systemd[1]: libpod-dc944fe5b28e0ade279912f2daa23662b1b0cd2ee34e4b2124339e4fe1ea8839.scope: Deactivated successfully.
Feb 18 09:44:29 np0005623263 systemd[1]: libpod-dc944fe5b28e0ade279912f2daa23662b1b0cd2ee34e4b2124339e4fe1ea8839.scope: Consumed 2.852s CPU time.
Feb 18 09:44:29 np0005623263 podman[188958]: 2026-02-18 14:44:29.555177055 +0000 UTC m=+0.059104005 container died dc944fe5b28e0ade279912f2daa23662b1b0cd2ee34e4b2124339e4fe1ea8839 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=nova_compute, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-587c945a7cd9eecda0621b518ac16d593aac16e2e28b5d93cdff8b69b5d209f1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3)
Feb 18 09:44:29 np0005623263 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc944fe5b28e0ade279912f2daa23662b1b0cd2ee34e4b2124339e4fe1ea8839-userdata-shm.mount: Deactivated successfully.
Feb 18 09:44:29 np0005623263 systemd[1]: var-lib-containers-storage-overlay-9d79250798c9415e005bbd31e7dbf7a4faee73069b53f8bbd4e71d27dfdf5a19-merged.mount: Deactivated successfully.
Feb 18 09:44:29 np0005623263 podman[188958]: 2026-02-18 14:44:29.630677894 +0000 UTC m=+0.134604804 container cleanup dc944fe5b28e0ade279912f2daa23662b1b0cd2ee34e4b2124339e4fe1ea8839 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_id=nova_compute, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-587c945a7cd9eecda0621b518ac16d593aac16e2e28b5d93cdff8b69b5d209f1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3)
Feb 18 09:44:29 np0005623263 podman[188958]: nova_compute
Feb 18 09:44:29 np0005623263 podman[188986]: nova_compute
Feb 18 09:44:29 np0005623263 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Feb 18 09:44:29 np0005623263 systemd[1]: Stopped nova_compute container.
Feb 18 09:44:29 np0005623263 systemd[1]: Starting nova_compute container...
Feb 18 09:44:29 np0005623263 systemd[1]: Started libcrun container.
Feb 18 09:44:29 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d79250798c9415e005bbd31e7dbf7a4faee73069b53f8bbd4e71d27dfdf5a19/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 18 09:44:29 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d79250798c9415e005bbd31e7dbf7a4faee73069b53f8bbd4e71d27dfdf5a19/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 18 09:44:29 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d79250798c9415e005bbd31e7dbf7a4faee73069b53f8bbd4e71d27dfdf5a19/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 18 09:44:29 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d79250798c9415e005bbd31e7dbf7a4faee73069b53f8bbd4e71d27dfdf5a19/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 18 09:44:29 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d79250798c9415e005bbd31e7dbf7a4faee73069b53f8bbd4e71d27dfdf5a19/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 18 09:44:29 np0005623263 podman[188999]: 2026-02-18 14:44:29.789229932 +0000 UTC m=+0.077961278 container init dc944fe5b28e0ade279912f2daa23662b1b0cd2ee34e4b2124339e4fe1ea8839 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-587c945a7cd9eecda0621b518ac16d593aac16e2e28b5d93cdff8b69b5d209f1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Feb 18 09:44:29 np0005623263 podman[188999]: 2026-02-18 14:44:29.794917471 +0000 UTC m=+0.083648797 container start dc944fe5b28e0ade279912f2daa23662b1b0cd2ee34e4b2124339e4fe1ea8839 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855-587c945a7cd9eecda0621b518ac16d593aac16e2e28b5d93cdff8b69b5d209f1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=nova_compute, org.label-schema.build-date=20260127)
Feb 18 09:44:29 np0005623263 podman[188999]: nova_compute
Feb 18 09:44:29 np0005623263 nova_compute[189016]: + sudo -E kolla_set_configs
Feb 18 09:44:29 np0005623263 systemd[1]: Started nova_compute container.
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Validating config file
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Copying service configuration files
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Copying /var/lib/kolla/config_files/src/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Deleting /etc/ceph
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Creating directory /etc/ceph
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Setting permission for /etc/ceph
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Writing out command to execute
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 18 09:44:29 np0005623263 nova_compute[189016]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 18 09:44:29 np0005623263 nova_compute[189016]: ++ cat /run_command
Feb 18 09:44:29 np0005623263 nova_compute[189016]: + CMD=nova-compute
Feb 18 09:44:29 np0005623263 nova_compute[189016]: + ARGS=
Feb 18 09:44:29 np0005623263 nova_compute[189016]: + sudo kolla_copy_cacerts
Feb 18 09:44:29 np0005623263 nova_compute[189016]: + [[ ! -n '' ]]
Feb 18 09:44:29 np0005623263 nova_compute[189016]: + . kolla_extend_start
Feb 18 09:44:29 np0005623263 nova_compute[189016]: + echo 'Running command: '\''nova-compute'\'''
Feb 18 09:44:29 np0005623263 nova_compute[189016]: Running command: 'nova-compute'
Feb 18 09:44:29 np0005623263 nova_compute[189016]: + umask 0022
Feb 18 09:44:29 np0005623263 nova_compute[189016]: + exec nova-compute
Feb 18 09:44:30 np0005623263 python3.9[189180]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 18 09:44:30 np0005623263 systemd[1]: Started libpod-conmon-e36d3bca34579b583506431d0c9142fd9b21d30d6b408abf1b5971a8d9454810.scope.
Feb 18 09:44:30 np0005623263 systemd[1]: Started libcrun container.
Feb 18 09:44:30 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b02af7f82d55448d54bd3a4fcc030f4af5848ff32c8369727f227c185a0440/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Feb 18 09:44:30 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b02af7f82d55448d54bd3a4fcc030f4af5848ff32c8369727f227c185a0440/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 18 09:44:30 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b02af7f82d55448d54bd3a4fcc030f4af5848ff32c8369727f227c185a0440/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Feb 18 09:44:30 np0005623263 podman[189204]: 2026-02-18 14:44:30.648018541 +0000 UTC m=+0.108788455 container init e36d3bca34579b583506431d0c9142fd9b21d30d6b408abf1b5971a8d9454810 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '587c945a7cd9eecda0621b518ac16d593aac16e2e28b5d93cdff8b69b5d209f1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 18 09:44:30 np0005623263 podman[189204]: 2026-02-18 14:44:30.65443816 +0000 UTC m=+0.115208054 container start e36d3bca34579b583506431d0c9142fd9b21d30d6b408abf1b5971a8d9454810 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '587c945a7cd9eecda0621b518ac16d593aac16e2e28b5d93cdff8b69b5d209f1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 18 09:44:30 np0005623263 python3.9[189180]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Feb 18 09:44:30 np0005623263 nova_compute_init[189227]: INFO:nova_statedir:Applying nova statedir ownership
Feb 18 09:44:30 np0005623263 nova_compute_init[189227]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Feb 18 09:44:30 np0005623263 nova_compute_init[189227]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Feb 18 09:44:30 np0005623263 nova_compute_init[189227]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Feb 18 09:44:30 np0005623263 nova_compute_init[189227]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Feb 18 09:44:30 np0005623263 nova_compute_init[189227]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Feb 18 09:44:30 np0005623263 nova_compute_init[189227]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Feb 18 09:44:30 np0005623263 nova_compute_init[189227]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Feb 18 09:44:30 np0005623263 nova_compute_init[189227]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Feb 18 09:44:30 np0005623263 nova_compute_init[189227]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Feb 18 09:44:30 np0005623263 nova_compute_init[189227]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Feb 18 09:44:30 np0005623263 nova_compute_init[189227]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Feb 18 09:44:30 np0005623263 nova_compute_init[189227]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Feb 18 09:44:30 np0005623263 nova_compute_init[189227]: INFO:nova_statedir:Nova statedir ownership complete
Feb 18 09:44:30 np0005623263 systemd[1]: libpod-e36d3bca34579b583506431d0c9142fd9b21d30d6b408abf1b5971a8d9454810.scope: Deactivated successfully.
Feb 18 09:44:30 np0005623263 podman[189242]: 2026-02-18 14:44:30.741562643 +0000 UTC m=+0.028425592 container died e36d3bca34579b583506431d0c9142fd9b21d30d6b408abf1b5971a8d9454810 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '587c945a7cd9eecda0621b518ac16d593aac16e2e28b5d93cdff8b69b5d209f1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=nova_compute_init)
Feb 18 09:44:30 np0005623263 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e36d3bca34579b583506431d0c9142fd9b21d30d6b408abf1b5971a8d9454810-userdata-shm.mount: Deactivated successfully.
Feb 18 09:44:30 np0005623263 systemd[1]: var-lib-containers-storage-overlay-c0b02af7f82d55448d54bd3a4fcc030f4af5848ff32c8369727f227c185a0440-merged.mount: Deactivated successfully.
Feb 18 09:44:30 np0005623263 podman[189242]: 2026-02-18 14:44:30.7763656 +0000 UTC m=+0.063228539 container cleanup e36d3bca34579b583506431d0c9142fd9b21d30d6b408abf1b5971a8d9454810 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '587c945a7cd9eecda0621b518ac16d593aac16e2e28b5d93cdff8b69b5d209f1'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 09:44:30 np0005623263 systemd[1]: libpod-conmon-e36d3bca34579b583506431d0c9142fd9b21d30d6b408abf1b5971a8d9454810.scope: Deactivated successfully.
Feb 18 09:44:31 np0005623263 systemd[1]: session-23.scope: Deactivated successfully.
Feb 18 09:44:31 np0005623263 systemd[1]: session-23.scope: Consumed 1min 25.669s CPU time.
Feb 18 09:44:31 np0005623263 systemd-logind[831]: Session 23 logged out. Waiting for processes to exit.
Feb 18 09:44:31 np0005623263 systemd-logind[831]: Removed session 23.
Feb 18 09:44:31 np0005623263 nova_compute[189016]: 2026-02-18 14:44:31.850 189020 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 18 09:44:31 np0005623263 nova_compute[189016]: 2026-02-18 14:44:31.850 189020 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 18 09:44:31 np0005623263 nova_compute[189016]: 2026-02-18 14:44:31.850 189020 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 18 09:44:31 np0005623263 nova_compute[189016]: 2026-02-18 14:44:31.851 189020 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.027 189020 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.038 189020 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.039 189020 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.465 189020 INFO nova.virt.driver [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.570 189020 INFO nova.compute.provider_config [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.590 189020 DEBUG oslo_concurrency.lockutils [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.590 189020 DEBUG oslo_concurrency.lockutils [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.590 189020 DEBUG oslo_concurrency.lockutils [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.591 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.591 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.591 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.591 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.591 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.592 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.592 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.592 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.592 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.592 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.592 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.593 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.593 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.593 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.593 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.593 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.594 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.594 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.594 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.594 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.594 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.594 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.594 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.595 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.595 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.595 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.595 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.595 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.595 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.595 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.596 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.596 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.596 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.596 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.596 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.596 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.596 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.597 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.597 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.597 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.597 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.597 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.597 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.598 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.598 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.598 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.598 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.598 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.598 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.599 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.599 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.599 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.599 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.599 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.599 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.599 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.599 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.600 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.600 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.600 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.600 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.600 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.600 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.600 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.601 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.601 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.601 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.601 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.601 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.601 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.601 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.602 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.602 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.602 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.602 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.602 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.602 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.602 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.603 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.603 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.603 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.603 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.603 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.603 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.603 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.603 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.604 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.604 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.604 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.604 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.604 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.604 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.604 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.605 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.605 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.605 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.605 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.605 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.605 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.605 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.605 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.606 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.606 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.606 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.606 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.606 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.606 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.606 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.606 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.607 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.607 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.607 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.607 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.607 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.607 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.607 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.608 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.608 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.608 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.608 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.608 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.608 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.608 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.608 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.609 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.609 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.609 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.609 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.609 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.609 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.609 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.609 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.610 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.610 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.610 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.610 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.610 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.610 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.610 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.610 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.611 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.611 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.611 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.611 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.611 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.611 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.611 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.612 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.612 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.612 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.612 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.612 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.612 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.612 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.613 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.613 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.613 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.613 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.613 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.613 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.613 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.614 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.614 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.614 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.614 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.614 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.614 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.614 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.615 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.615 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.615 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.615 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.615 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.615 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.615 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.616 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.616 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.616 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.616 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.616 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.616 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.616 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.617 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.617 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.617 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.617 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.617 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.617 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.617 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.618 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.618 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.618 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.618 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.618 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.618 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.618 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.618 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.619 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.619 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.619 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.619 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.619 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.619 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.619 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.620 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.620 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.620 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.620 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.620 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.620 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.620 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.621 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.621 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.621 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.621 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.621 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.621 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.621 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.622 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.622 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.622 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.622 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.622 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.622 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.622 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.622 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.623 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.623 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.623 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.623 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.623 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.623 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.623 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.624 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.624 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.624 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.624 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.624 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.624 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.625 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.625 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.625 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.625 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.625 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.625 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.625 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.626 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.626 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.626 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.626 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.626 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.626 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.626 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.626 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.627 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.627 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.627 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.627 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.627 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.627 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.627 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.628 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.628 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.628 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.628 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.628 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.628 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.629 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.629 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.629 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.629 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.629 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.629 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.629 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.629 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.630 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.630 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.630 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.630 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.630 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.630 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.630 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.631 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.631 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.631 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.631 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.631 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.631 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.631 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.632 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.632 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.632 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.632 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.632 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.632 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.632 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.633 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.633 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.633 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.633 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.633 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.633 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.633 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.634 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.634 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.634 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.634 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.634 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.634 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.634 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.635 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.635 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.635 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.635 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.635 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.635 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.635 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.636 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.636 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.636 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.636 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.636 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.636 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.636 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.637 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.637 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.637 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.637 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.637 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.637 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.637 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.637 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.638 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.638 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.638 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.638 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.638 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.638 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.638 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.639 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.639 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.639 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.639 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.639 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.639 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.639 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.639 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.640 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.640 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.640 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.640 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.640 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.640 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.640 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.641 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.641 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.641 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.641 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.641 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.642 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.642 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.642 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.642 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.642 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.642 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.642 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.642 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.643 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.643 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.643 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.643 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.643 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.643 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.643 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.644 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.644 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.644 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.644 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.644 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.644 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.645 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.645 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.645 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.645 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.645 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.645 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.645 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.646 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.646 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.646 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.646 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.646 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.646 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.647 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.647 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.647 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.647 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.647 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.647 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.647 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.647 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.648 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.648 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.648 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.648 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.648 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.648 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.648 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.649 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.649 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.649 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.649 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.649 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.649 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.649 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.650 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.650 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.650 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.650 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.650 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.650 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.650 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.651 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.651 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.651 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.651 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.651 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.651 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.651 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.652 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.652 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.652 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.652 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.652 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.652 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.652 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.652 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.653 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.653 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.653 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.653 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.653 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.653 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.653 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.654 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.654 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.654 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.654 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.654 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.654 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.654 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.654 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.655 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.655 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.655 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.655 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.655 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.655 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.656 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.656 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.656 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.656 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.656 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.656 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.657 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.657 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.657 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.657 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.657 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.657 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.658 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.658 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.658 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.658 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.658 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.658 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.659 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.659 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.659 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.659 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.659 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.659 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.659 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.660 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.660 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.660 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.660 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.660 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.661 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.661 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.661 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.661 189020 WARNING oslo_config.cfg [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 18 09:44:32 np0005623263 nova_compute[189016]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 18 09:44:32 np0005623263 nova_compute[189016]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 18 09:44:32 np0005623263 nova_compute[189016]: and ``live_migration_inbound_addr`` respectively.
Feb 18 09:44:32 np0005623263 nova_compute[189016]: ).  Its value may be silently ignored in the future.#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.662 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.662 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.662 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.662 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.662 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.662 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.663 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.663 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.663 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.663 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.663 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.663 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.664 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.664 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.664 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.664 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.664 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.665 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.665 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.665 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.665 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.665 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.665 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.665 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.666 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.666 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.666 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.666 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.666 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.666 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.666 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.667 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.667 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.667 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.667 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.667 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.667 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.668 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.668 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.668 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.668 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.668 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.668 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.668 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.669 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.669 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.669 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.669 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.669 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.669 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.669 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.670 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.670 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.670 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.670 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.670 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.670 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.670 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.670 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.671 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.671 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.671 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.671 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.671 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.671 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.671 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.672 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.672 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.672 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.672 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.672 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.672 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.672 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.672 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.673 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.673 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.673 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.673 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.673 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.673 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.673 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.674 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.674 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.674 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.674 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.674 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.674 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.674 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.675 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.675 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.675 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.675 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.675 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.675 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.675 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.676 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.676 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.676 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.676 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.676 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.676 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.676 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.676 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.677 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.677 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.677 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.677 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.677 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.677 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.677 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.678 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.678 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.678 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.678 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.678 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.678 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.678 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.679 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.679 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.679 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.679 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.679 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.679 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.679 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.680 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.680 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.680 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.680 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.680 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.680 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.681 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.681 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.681 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.681 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.681 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.681 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.681 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.681 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.682 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.682 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.682 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.682 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.682 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.682 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.683 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.683 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.683 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.683 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.683 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.683 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.683 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.684 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.684 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.684 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.684 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.684 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.684 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.685 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.685 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.685 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.685 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.685 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.685 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.685 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.686 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.686 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.686 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.686 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.686 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.686 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.686 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.687 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.687 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.687 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.687 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.687 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.687 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.687 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.688 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.688 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.688 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.688 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.688 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.689 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.689 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.689 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.689 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.689 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.689 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.689 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.690 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.690 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.690 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.690 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.690 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.690 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.690 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.691 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.691 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.691 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.691 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.691 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.691 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.691 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.692 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.692 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.692 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.692 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.692 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.692 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.692 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.693 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.693 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.693 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.693 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.693 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.693 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.693 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.693 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.694 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.694 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.694 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.694 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.694 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.694 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.694 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.695 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.695 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.695 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.695 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.695 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.695 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.695 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.696 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.696 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.696 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.696 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.696 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.696 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.696 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.697 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.697 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.697 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.697 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.697 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.697 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.698 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.698 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.698 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.698 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.698 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.699 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.699 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.699 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.699 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.699 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.699 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.700 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.700 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.700 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.700 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.700 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.700 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.700 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.700 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.701 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.701 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.701 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.701 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.701 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.701 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.701 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.702 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.702 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.702 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.702 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.702 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.702 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.702 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.702 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.703 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.703 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.703 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.703 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.703 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.703 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.703 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.704 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.704 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.704 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.704 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.704 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.704 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.704 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.705 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.705 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.705 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.705 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.705 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.705 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.705 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.706 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.706 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.706 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.706 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.706 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.706 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.706 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.706 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.707 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.707 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.707 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.707 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.707 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.707 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.707 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.708 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.708 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.708 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.708 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.708 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.708 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.708 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.709 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.709 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.709 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.709 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.709 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.709 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.709 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.709 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.710 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.710 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.710 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.710 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.710 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.710 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.710 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.711 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.711 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.711 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.711 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.711 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.711 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.711 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.712 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.712 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.712 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.712 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.712 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.712 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.712 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.712 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.713 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.713 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.713 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.713 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.713 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.713 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.713 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.714 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.714 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.714 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.714 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.714 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.714 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.714 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.714 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.715 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.715 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.715 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.715 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.715 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.715 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.715 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.715 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.716 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.716 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.716 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.716 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.716 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.717 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.717 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.717 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.717 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.717 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.717 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.718 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.718 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.718 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.718 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.718 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.718 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.719 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.719 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.719 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.719 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.719 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.719 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.720 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.720 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.720 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.720 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.720 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.720 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.721 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.721 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.721 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.721 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.721 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.721 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.721 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.722 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.722 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.722 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.722 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.722 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.722 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.723 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.723 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.723 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.723 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.723 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.723 189020 DEBUG oslo_service.service [None req-9085de0d-08b5-4c5d-9ac7-94261a9e9b41 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.725 189020 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.740 189020 INFO nova.virt.node [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Determined node identity 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 from /var/lib/nova/compute_id#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.741 189020 DEBUG nova.virt.libvirt.host [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.742 189020 DEBUG nova.virt.libvirt.host [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.742 189020 DEBUG nova.virt.libvirt.host [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.743 189020 DEBUG nova.virt.libvirt.host [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.753 189020 DEBUG nova.virt.libvirt.host [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd38b9e9370> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.755 189020 DEBUG nova.virt.libvirt.host [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd38b9e9370> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.756 189020 INFO nova.virt.libvirt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Connection event '1' reason 'None'#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.760 189020 INFO nova.virt.libvirt.host [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Libvirt host capabilities <capabilities>
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <host>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <uuid>022190eb-356a-449e-bedd-18333ca89982</uuid>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <cpu>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <arch>x86_64</arch>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model>EPYC-Rome-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <vendor>AMD</vendor>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <microcode version='16777317'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <signature family='23' model='49' stepping='0'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <maxphysaddr mode='emulate' bits='40'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='x2apic'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='tsc-deadline'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='osxsave'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='hypervisor'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='tsc_adjust'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='spec-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='stibp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='arch-capabilities'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='ssbd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='cmp_legacy'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='topoext'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='virt-ssbd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='lbrv'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='tsc-scale'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='vmcb-clean'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='pause-filter'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='pfthreshold'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='svme-addr-chk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='rdctl-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='skip-l1dfl-vmentry'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='mds-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature name='pschange-mc-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <pages unit='KiB' size='4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <pages unit='KiB' size='2048'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <pages unit='KiB' size='1048576'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </cpu>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <power_management>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <suspend_mem/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <suspend_disk/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <suspend_hybrid/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </power_management>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <iommu support='no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <migration_features>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <live/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <uri_transports>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <uri_transport>tcp</uri_transport>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <uri_transport>rdma</uri_transport>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </uri_transports>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </migration_features>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <topology>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <cells num='1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <cell id='0'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:          <memory unit='KiB'>7864284</memory>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:          <pages unit='KiB' size='4'>1966071</pages>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:          <pages unit='KiB' size='2048'>0</pages>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:          <pages unit='KiB' size='1048576'>0</pages>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:          <distances>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:            <sibling id='0' value='10'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:          </distances>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:          <cpus num='8'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:          </cpus>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        </cell>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </cells>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </topology>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <cache>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </cache>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <secmodel>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model>selinux</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <doi>0</doi>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </secmodel>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <secmodel>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model>dac</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <doi>0</doi>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <baselabel type='kvm'>+107:+107</baselabel>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <baselabel type='qemu'>+107:+107</baselabel>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </secmodel>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  </host>
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <guest>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <os_type>hvm</os_type>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <arch name='i686'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <wordsize>32</wordsize>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <domain type='qemu'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <domain type='kvm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </arch>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <features>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <pae/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <nonpae/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <acpi default='on' toggle='yes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <apic default='on' toggle='no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <cpuselection/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <deviceboot/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <disksnapshot default='on' toggle='no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <externalSnapshot/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </features>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  </guest>
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <guest>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <os_type>hvm</os_type>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <arch name='x86_64'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <wordsize>64</wordsize>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <domain type='qemu'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <domain type='kvm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </arch>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <features>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <acpi default='on' toggle='yes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <apic default='on' toggle='no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <cpuselection/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <deviceboot/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <disksnapshot default='on' toggle='no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <externalSnapshot/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </features>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  </guest>
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 
Feb 18 09:44:32 np0005623263 nova_compute[189016]: </capabilities>
Feb 18 09:44:32 np0005623263 nova_compute[189016]: #033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.768 189020 DEBUG nova.virt.libvirt.host [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.771 189020 DEBUG nova.virt.libvirt.volume.mount [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.773 189020 DEBUG nova.virt.libvirt.host [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 18 09:44:32 np0005623263 nova_compute[189016]: <domainCapabilities>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <path>/usr/libexec/qemu-kvm</path>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <domain>kvm</domain>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <machine>pc-q35-rhel9.8.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <arch>i686</arch>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <vcpu max='4096'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <iothreads supported='yes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <os supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <enum name='firmware'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <loader supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='type'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>rom</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>pflash</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='readonly'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>yes</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>no</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='secure'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>no</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </loader>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  </os>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <cpu>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <mode name='host-passthrough' supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='hostPassthroughMigratable'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>on</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>off</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </mode>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <mode name='maximum' supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='maximumMigratable'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>on</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>off</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </mode>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <mode name='host-model' supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <vendor>AMD</vendor>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='x2apic'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='tsc-deadline'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='hypervisor'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='tsc_adjust'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='spec-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='stibp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='ssbd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='cmp_legacy'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='overflow-recov'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='succor'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='amd-ssbd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='virt-ssbd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='lbrv'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='tsc-scale'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='vmcb-clean'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='flushbyasid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='pause-filter'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='pfthreshold'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='svme-addr-chk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='disable' name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </mode>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <mode name='custom' supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-noTSX'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-v5'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='ClearwaterForest'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ddpd-u'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='intel-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='lam'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sha512'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sm3'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sm4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='ClearwaterForest-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ddpd-u'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='intel-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='lam'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sha512'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sm3'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sm4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cooperlake'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cooperlake-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cooperlake-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Denverton'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mpx'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Denverton-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mpx'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Denverton-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Denverton-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Dhyana-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Genoa'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='auto-ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Genoa-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='auto-ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Genoa-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='auto-ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='perfmon-v2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Milan'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Milan-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Milan-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Milan-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Rome'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Rome-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Rome-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Rome-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Turin'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='auto-ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vp2intersect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibpb-brtype'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='perfmon-v2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbpb'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='srso-user-kernel-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Turin-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='auto-ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vp2intersect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibpb-brtype'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='perfmon-v2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbpb'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='srso-user-kernel-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-v5'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='GraniteRapids'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='GraniteRapids-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='GraniteRapids-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-128'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-256'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-512'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='GraniteRapids-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-128'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-256'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-512'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-noTSX'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-noTSX'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v5'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v6'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v7'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='IvyBridge'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='IvyBridge-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='IvyBridge-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='IvyBridge-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='KnightsMill'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-4fmaps'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-4vnniw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512er'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512pf'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='KnightsMill-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-4fmaps'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-4vnniw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512er'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512pf'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Opteron_G4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fma4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xop'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Opteron_G4-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fma4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xop'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Opteron_G5'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fma4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tbm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xop'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Opteron_G5-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fma4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tbm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xop'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SapphireRapids'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SapphireRapids-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SapphireRapids-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SapphireRapids-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SapphireRapids-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SierraForest'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SierraForest-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SierraForest-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='intel-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='lam'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SierraForest-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='intel-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='lam'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-v5'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Snowridge'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='core-capability'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mpx'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='split-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Snowridge-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='core-capability'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mpx'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='split-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Snowridge-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='core-capability'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='split-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Snowridge-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='core-capability'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='split-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Snowridge-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='athlon'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='3dnow'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='3dnowext'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='athlon-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='3dnow'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='3dnowext'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='core2duo'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='core2duo-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='coreduo'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='coreduo-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='n270'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='n270-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='phenom'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='3dnow'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='3dnowext'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='phenom-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='3dnow'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='3dnowext'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </mode>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  </cpu>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <memoryBacking supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <enum name='sourceType'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <value>file</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <value>anonymous</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <value>memfd</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  </memoryBacking>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <devices>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <disk supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='diskDevice'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>disk</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>cdrom</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>floppy</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>lun</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='bus'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>fdc</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>scsi</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtio</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>usb</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>sata</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='model'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtio</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtio-transitional</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtio-non-transitional</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </disk>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <graphics supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='type'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>vnc</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>egl-headless</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>dbus</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </graphics>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <video supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='modelType'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>vga</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>cirrus</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtio</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>none</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>bochs</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>ramfb</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </video>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <hostdev supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='mode'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>subsystem</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='startupPolicy'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>default</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>mandatory</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>requisite</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>optional</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='subsysType'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>usb</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>pci</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>scsi</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='capsType'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='pciBackend'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </hostdev>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <rng supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='model'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtio</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtio-transitional</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtio-non-transitional</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='backendModel'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>random</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>egd</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>builtin</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </rng>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <filesystem supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='driverType'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>path</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>handle</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtiofs</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </filesystem>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <tpm supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='model'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>tpm-tis</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>tpm-crb</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='backendModel'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>emulator</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>external</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='backendVersion'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>2.0</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </tpm>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <redirdev supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='bus'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>usb</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </redirdev>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <channel supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='type'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>pty</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>unix</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </channel>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <crypto supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='model'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='type'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>qemu</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='backendModel'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>builtin</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </crypto>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <interface supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='backendType'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>default</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>passt</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </interface>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <panic supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='model'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>isa</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>hyperv</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </panic>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <console supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='type'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>null</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>vc</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>pty</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>dev</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>file</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>pipe</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>stdio</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>udp</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>tcp</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>unix</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>qemu-vdagent</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>dbus</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </console>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  </devices>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <features>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <gic supported='no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <vmcoreinfo supported='yes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <genid supported='yes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <backingStoreInput supported='yes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <backup supported='yes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <async-teardown supported='yes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <s390-pv supported='no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <ps2 supported='yes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <tdx supported='no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <sev supported='no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <sgx supported='no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <hyperv supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='features'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>relaxed</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>vapic</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>spinlocks</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>vpindex</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>runtime</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>synic</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>stimer</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>reset</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>vendor_id</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>frequencies</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>reenlightenment</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>tlbflush</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>ipi</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>avic</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>emsr_bitmap</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>xmm_input</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <defaults>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <spinlocks>4095</spinlocks>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <stimer_direct>on</stimer_direct>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <tlbflush_direct>on</tlbflush_direct>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <tlbflush_extended>on</tlbflush_extended>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </defaults>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </hyperv>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <launchSecurity supported='no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  </features>
Feb 18 09:44:32 np0005623263 nova_compute[189016]: </domainCapabilities>
Feb 18 09:44:32 np0005623263 nova_compute[189016]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.779 189020 DEBUG nova.virt.libvirt.host [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 18 09:44:32 np0005623263 nova_compute[189016]: <domainCapabilities>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <path>/usr/libexec/qemu-kvm</path>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <domain>kvm</domain>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <arch>i686</arch>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <vcpu max='240'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <iothreads supported='yes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <os supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <enum name='firmware'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <loader supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='type'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>rom</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>pflash</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='readonly'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>yes</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>no</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='secure'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>no</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </loader>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  </os>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <cpu>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <mode name='host-passthrough' supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='hostPassthroughMigratable'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>on</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>off</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </mode>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <mode name='maximum' supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='maximumMigratable'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>on</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>off</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </mode>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <mode name='host-model' supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <vendor>AMD</vendor>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='x2apic'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='tsc-deadline'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='hypervisor'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='tsc_adjust'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='spec-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='stibp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='ssbd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='cmp_legacy'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='overflow-recov'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='succor'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='amd-ssbd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='virt-ssbd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='lbrv'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='tsc-scale'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='vmcb-clean'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='flushbyasid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='pause-filter'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='pfthreshold'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='svme-addr-chk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='disable' name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </mode>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <mode name='custom' supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-noTSX'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-v5'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='ClearwaterForest'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ddpd-u'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='intel-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='lam'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sha512'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sm3'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sm4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='ClearwaterForest-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ddpd-u'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='intel-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='lam'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sha512'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sm3'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sm4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cooperlake'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cooperlake-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cooperlake-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Denverton'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mpx'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Denverton-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mpx'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Denverton-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Denverton-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Dhyana-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Genoa'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='auto-ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Genoa-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='auto-ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Genoa-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='auto-ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='perfmon-v2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Milan'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Milan-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Milan-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Milan-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Rome'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Rome-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Rome-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Rome-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Turin'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='auto-ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vp2intersect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibpb-brtype'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='perfmon-v2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbpb'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='srso-user-kernel-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Turin-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='auto-ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vp2intersect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibpb-brtype'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='perfmon-v2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbpb'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='srso-user-kernel-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-v5'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='GraniteRapids'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='GraniteRapids-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='GraniteRapids-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-128'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-256'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-512'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='GraniteRapids-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-128'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-256'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-512'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-noTSX'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-noTSX'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v5'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v6'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v7'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='IvyBridge'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='IvyBridge-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='IvyBridge-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='IvyBridge-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='KnightsMill'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-4fmaps'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-4vnniw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512er'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512pf'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='KnightsMill-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-4fmaps'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-4vnniw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512er'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512pf'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Opteron_G4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fma4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xop'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Opteron_G4-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fma4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xop'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Opteron_G5'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fma4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tbm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xop'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Opteron_G5-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fma4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tbm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xop'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SapphireRapids'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SapphireRapids-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SapphireRapids-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SapphireRapids-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SapphireRapids-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SierraForest'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SierraForest-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SierraForest-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='intel-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='lam'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SierraForest-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='intel-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='lam'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-v5'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Snowridge'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='core-capability'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mpx'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='split-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Snowridge-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='core-capability'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mpx'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='split-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Snowridge-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='core-capability'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='split-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Snowridge-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='core-capability'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='split-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Snowridge-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='athlon'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='3dnow'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='3dnowext'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='athlon-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='3dnow'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='3dnowext'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='core2duo'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='core2duo-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='coreduo'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='coreduo-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='n270'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='n270-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='phenom'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='3dnow'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='3dnowext'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='phenom-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='3dnow'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='3dnowext'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </mode>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  </cpu>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <memoryBacking supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <enum name='sourceType'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <value>file</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <value>anonymous</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <value>memfd</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  </memoryBacking>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <devices>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <disk supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='diskDevice'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>disk</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>cdrom</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>floppy</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>lun</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='bus'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>ide</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>fdc</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>scsi</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtio</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>usb</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>sata</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='model'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtio</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtio-transitional</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtio-non-transitional</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </disk>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <graphics supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='type'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>vnc</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>egl-headless</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>dbus</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </graphics>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <video supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='modelType'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>vga</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>cirrus</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtio</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>none</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>bochs</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>ramfb</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </video>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <hostdev supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='mode'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>subsystem</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='startupPolicy'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>default</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>mandatory</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>requisite</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>optional</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='subsysType'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>usb</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>pci</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>scsi</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='capsType'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='pciBackend'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </hostdev>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <rng supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='model'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtio</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtio-transitional</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtio-non-transitional</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='backendModel'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>random</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>egd</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>builtin</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </rng>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <filesystem supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='driverType'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>path</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>handle</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>virtiofs</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </filesystem>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <tpm supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='model'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>tpm-tis</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>tpm-crb</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='backendModel'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>emulator</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>external</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='backendVersion'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>2.0</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </tpm>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <redirdev supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='bus'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>usb</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </redirdev>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <channel supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='type'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>pty</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>unix</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </channel>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <crypto supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='model'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='type'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>qemu</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='backendModel'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>builtin</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </crypto>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <interface supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='backendType'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>default</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>passt</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </interface>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <panic supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='model'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>isa</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>hyperv</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </panic>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <console supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='type'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>null</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>vc</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>pty</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>dev</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>file</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>pipe</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>stdio</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>udp</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>tcp</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>unix</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>qemu-vdagent</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>dbus</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </console>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  </devices>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <features>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <gic supported='no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <vmcoreinfo supported='yes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <genid supported='yes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <backingStoreInput supported='yes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <backup supported='yes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <async-teardown supported='yes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <s390-pv supported='no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <ps2 supported='yes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <tdx supported='no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <sev supported='no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <sgx supported='no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <hyperv supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='features'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>relaxed</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>vapic</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>spinlocks</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>vpindex</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>runtime</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>synic</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>stimer</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>reset</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>vendor_id</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>frequencies</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>reenlightenment</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>tlbflush</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>ipi</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>avic</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>emsr_bitmap</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>xmm_input</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <defaults>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <spinlocks>4095</spinlocks>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <stimer_direct>on</stimer_direct>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <tlbflush_direct>on</tlbflush_direct>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <tlbflush_extended>on</tlbflush_extended>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </defaults>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </hyperv>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <launchSecurity supported='no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  </features>
Feb 18 09:44:32 np0005623263 nova_compute[189016]: </domainCapabilities>
Feb 18 09:44:32 np0005623263 nova_compute[189016]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.839 189020 DEBUG nova.virt.libvirt.host [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Feb 18 09:44:32 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.844 189020 DEBUG nova.virt.libvirt.host [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 18 09:44:32 np0005623263 nova_compute[189016]: <domainCapabilities>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <path>/usr/libexec/qemu-kvm</path>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <domain>kvm</domain>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <machine>pc-q35-rhel9.8.0</machine>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <arch>x86_64</arch>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <vcpu max='4096'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <iothreads supported='yes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <os supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <enum name='firmware'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <value>efi</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <loader supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='type'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>rom</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>pflash</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='readonly'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>yes</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>no</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='secure'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>yes</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>no</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </loader>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  </os>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:  <cpu>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <mode name='host-passthrough' supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='hostPassthroughMigratable'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>on</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>off</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </mode>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <mode name='maximum' supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <enum name='maximumMigratable'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>on</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <value>off</value>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </mode>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <mode name='host-model' supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <vendor>AMD</vendor>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='x2apic'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='tsc-deadline'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='hypervisor'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='tsc_adjust'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='spec-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='stibp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='ssbd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='cmp_legacy'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='overflow-recov'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='succor'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='amd-ssbd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='virt-ssbd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='lbrv'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='tsc-scale'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='vmcb-clean'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='flushbyasid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='pause-filter'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='pfthreshold'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='svme-addr-chk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <feature policy='disable' name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    </mode>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:    <mode name='custom' supported='yes'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-noTSX'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-v5'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='ClearwaterForest'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ddpd-u'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='intel-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='lam'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sha512'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sm3'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sm4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='ClearwaterForest-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ddpd-u'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='intel-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='lam'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sha512'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sm3'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sm4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cooperlake'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cooperlake-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Cooperlake-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Denverton'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mpx'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Denverton-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mpx'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Denverton-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Denverton-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Dhyana-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Genoa'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='auto-ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Genoa-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='auto-ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Genoa-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='auto-ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='perfmon-v2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Milan'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Milan-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Milan-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Milan-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Rome'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Rome-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Rome-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Rome-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Turin'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='auto-ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vp2intersect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibpb-brtype'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='perfmon-v2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbpb'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='srso-user-kernel-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-Turin-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amd-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='auto-ibrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vp2intersect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fs-gs-base-ns'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibpb-brtype'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='no-nested-data-bp'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='null-sel-clr-base'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='perfmon-v2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbpb'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='srso-user-kernel-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='stibp-always-on'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='EPYC-v5'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='GraniteRapids'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='GraniteRapids-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='GraniteRapids-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-128'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-256'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-512'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='GraniteRapids-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-128'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-256'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx10-512'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='prefetchiti'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-noTSX'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Haswell-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-noTSX'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v5'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v6'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Icelake-Server-v7'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='IvyBridge'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='IvyBridge-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='IvyBridge-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='IvyBridge-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='KnightsMill'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-4fmaps'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-4vnniw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512er'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512pf'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='KnightsMill-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-4fmaps'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-4vnniw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512er'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512pf'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Opteron_G4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fma4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xop'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Opteron_G4-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fma4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xop'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Opteron_G5'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fma4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tbm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xop'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Opteron_G5-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fma4'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tbm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xop'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SapphireRapids'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SapphireRapids-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SapphireRapids-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SapphireRapids-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SapphireRapids-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='amx-tile'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-bf16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-fp16'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512-vpopcntdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bitalg'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vbmi2'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrc'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fzrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='la57'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='taa-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='tsx-ldtrk'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SierraForest'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SierraForest-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SierraForest-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='intel-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='lam'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='SierraForest-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ifma'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-ne-convert'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx-vnni-int8'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bhi-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='bus-lock-detect'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='cmpccxadd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fbsdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='fsrs'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='intel-psfd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ipred-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='lam'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='mcdt-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pbrsb-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='psdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rrsba-ctrl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='sbdr-ssdp-no'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='serialize'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vaes'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='vpclmulqdq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Client-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-v1'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-v2'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-v3'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-v4'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:      <blockers model='Skylake-Server-v5'>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:32 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='Snowridge'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='core-capability'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='mpx'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='split-lock-detect'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='Snowridge-v1'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='core-capability'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='mpx'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='split-lock-detect'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='Snowridge-v2'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='core-capability'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='split-lock-detect'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='Snowridge-v3'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='core-capability'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='split-lock-detect'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='Snowridge-v4'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='cldemote'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='gfni'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='movdir64b'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='movdiri'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='xsaves'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='athlon'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='3dnow'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='3dnowext'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='athlon-v1'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='3dnow'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='3dnowext'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='core2duo'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='core2duo-v1'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='coreduo'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='coreduo-v1'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='n270'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='n270-v1'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='ss'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='phenom'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='3dnow'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='3dnowext'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='phenom-v1'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='3dnow'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='3dnowext'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </mode>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:  </cpu>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:  <memoryBacking supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <enum name='sourceType'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <value>file</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <value>anonymous</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <value>memfd</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:  </memoryBacking>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:  <devices>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <disk supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='diskDevice'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>disk</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>cdrom</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>floppy</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>lun</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='bus'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>fdc</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>scsi</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>virtio</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>usb</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>sata</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='model'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>virtio</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>virtio-transitional</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>virtio-non-transitional</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </disk>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <graphics supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='type'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>vnc</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>egl-headless</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>dbus</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </graphics>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <video supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='modelType'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>vga</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>cirrus</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>virtio</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>none</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>bochs</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>ramfb</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </video>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <hostdev supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='mode'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>subsystem</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='startupPolicy'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>default</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>mandatory</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>requisite</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>optional</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='subsysType'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>usb</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>pci</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>scsi</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='capsType'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='pciBackend'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </hostdev>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <rng supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='model'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>virtio</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>virtio-transitional</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>virtio-non-transitional</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='backendModel'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>random</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>egd</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>builtin</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </rng>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <filesystem supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='driverType'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>path</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>handle</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>virtiofs</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </filesystem>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <tpm supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='model'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>tpm-tis</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>tpm-crb</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='backendModel'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>emulator</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>external</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='backendVersion'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>2.0</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </tpm>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <redirdev supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='bus'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>usb</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </redirdev>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <channel supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='type'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>pty</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>unix</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </channel>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <crypto supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='model'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='type'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>qemu</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='backendModel'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>builtin</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </crypto>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <interface supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='backendType'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>default</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>passt</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </interface>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <panic supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='model'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>isa</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>hyperv</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </panic>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <console supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='type'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>null</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>vc</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>pty</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>dev</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>file</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>pipe</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>stdio</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>udp</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>tcp</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>unix</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>qemu-vdagent</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>dbus</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </console>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:  </devices>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:  <features>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <gic supported='no'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <vmcoreinfo supported='yes'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <genid supported='yes'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <backingStoreInput supported='yes'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <backup supported='yes'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <async-teardown supported='yes'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <s390-pv supported='no'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <ps2 supported='yes'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <tdx supported='no'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <sev supported='no'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <sgx supported='no'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <hyperv supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='features'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>relaxed</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>vapic</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>spinlocks</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>vpindex</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>runtime</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>synic</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>stimer</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>reset</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>vendor_id</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>frequencies</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>reenlightenment</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>tlbflush</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>ipi</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>avic</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>emsr_bitmap</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>xmm_input</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <defaults>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <spinlocks>4095</spinlocks>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <stimer_direct>on</stimer_direct>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <tlbflush_direct>on</tlbflush_direct>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <tlbflush_extended>on</tlbflush_extended>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </defaults>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </hyperv>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <launchSecurity supported='no'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:  </features>
Feb 18 09:44:33 np0005623263 nova_compute[189016]: </domainCapabilities>
Feb 18 09:44:33 np0005623263 nova_compute[189016]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 18 09:44:33 np0005623263 nova_compute[189016]: 2026-02-18 14:44:32.921 189020 DEBUG nova.virt.libvirt.host [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 18 09:44:33 np0005623263 nova_compute[189016]: <domainCapabilities>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:  <path>/usr/libexec/qemu-kvm</path>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:  <domain>kvm</domain>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:  <arch>x86_64</arch>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:  <vcpu max='240'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:  <iothreads supported='yes'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:  <os supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <enum name='firmware'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <loader supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='type'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>rom</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>pflash</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='readonly'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>yes</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>no</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='secure'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>no</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </loader>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:  </os>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:  <cpu>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <mode name='host-passthrough' supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='hostPassthroughMigratable'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>on</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>off</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </mode>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <mode name='maximum' supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <enum name='maximumMigratable'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>on</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <value>off</value>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </enum>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </mode>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <mode name='host-model' supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <vendor>AMD</vendor>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='x2apic'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='tsc-deadline'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='hypervisor'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='tsc_adjust'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='spec-ctrl'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='stibp'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='ssbd'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='cmp_legacy'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='overflow-recov'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='succor'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='ibrs'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='amd-ssbd'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='virt-ssbd'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='lbrv'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='tsc-scale'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='vmcb-clean'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='flushbyasid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='pause-filter'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='pfthreshold'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='svme-addr-chk'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <feature policy='disable' name='xsaves'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    </mode>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:    <mode name='custom' supported='yes'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='Broadwell'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-IBRS'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-noTSX'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-v1'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-v2'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-v3'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='Broadwell-v4'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='ibrs-all'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-v1'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512vnni'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='erms'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='hle'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='invpcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='pcid'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='pku'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='rtm'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      </blockers>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:      <blockers model='Cascadelake-Server-v2'>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512bw'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512cd'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512dq'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512f'/>
Feb 18 09:44:33 np0005623263 nova_compute[189016]:        <feature name='avx512vl'/>
Feb 18 09:45:58 np0005623263 python3.9[206176]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:45:58 np0005623263 rsyslogd[1015]: imjournal: 3020 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 18 09:45:58 np0005623263 python3.9[206255]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.wvwwpoz5 recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:45:59 np0005623263 python3.9[206405]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:00 np0005623263 python3.9[206829]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_pattern=*.json debug=False
Feb 18 09:46:01 np0005623263 python3.9[206982]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 18 09:46:02 np0005623263 python3[207137]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_id=openstack_network_exporter config_overrides={} config_patterns=*.json containers=['openstack_network_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Feb 18 09:46:05 np0005623263 podman[207152]: 2026-02-18 14:46:05.036612602 +0000 UTC m=+2.416221755 image pull ba17a9079b86bdce2cd9e03cad5d6d4d255cc298efd741b09239c34192e5621b quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Feb 18 09:46:05 np0005623263 podman[207249]: 2026-02-18 14:46:05.167945943 +0000 UTC m=+0.047938674 container create 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, distribution-scope=public, config_id=openstack_network_exporter, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, release=1770267347, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 18 09:46:05 np0005623263 podman[207249]: 2026-02-18 14:46:05.146047956 +0000 UTC m=+0.026040687 image pull ba17a9079b86bdce2cd9e03cad5d6d4d255cc298efd741b09239c34192e5621b quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Feb 18 09:46:05 np0005623263 python3[207137]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535 --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=openstack_network_exporter --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume 
/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Feb 18 09:46:05 np0005623263 python3.9[207440]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:46:07 np0005623263 python3.9[207595]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:07 np0005623263 podman[207643]: 2026-02-18 14:46:07.780887043 +0000 UTC m=+0.072466020 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 09:46:07 np0005623263 python3.9[207689]: ansible-stat Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:46:08 np0005623263 python3.9[207848]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771425967.9778087-1029-52317788718335/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:09 np0005623263 python3.9[207927]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 18 09:46:09 np0005623263 systemd[1]: Reloading.
Feb 18 09:46:09 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:46:09 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:46:10 np0005623263 python3.9[208046]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:46:10 np0005623263 systemd[1]: Reloading.
Feb 18 09:46:10 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:46:10 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:46:10 np0005623263 systemd[1]: Starting openstack_network_exporter container...
Feb 18 09:46:10 np0005623263 systemd[1]: Started libcrun container.
Feb 18 09:46:10 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3631d04e25fb788bdc17f1745f224822cf859d28726280595eedf39b8d635313/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb 18 09:46:10 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3631d04e25fb788bdc17f1745f224822cf859d28726280595eedf39b8d635313/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Feb 18 09:46:10 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3631d04e25fb788bdc17f1745f224822cf859d28726280595eedf39b8d635313/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Feb 18 09:46:10 np0005623263 systemd[1]: Started /usr/bin/podman healthcheck run 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165.
Feb 18 09:46:10 np0005623263 podman[208092]: 2026-02-18 14:46:10.944937236 +0000 UTC m=+0.142392514 container init 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 18 09:46:10 np0005623263 openstack_network_exporter[208107]: INFO    14:46:10 main.go:48: registering *bridge.Collector
Feb 18 09:46:10 np0005623263 openstack_network_exporter[208107]: INFO    14:46:10 main.go:48: registering *coverage.Collector
Feb 18 09:46:10 np0005623263 openstack_network_exporter[208107]: INFO    14:46:10 main.go:48: registering *datapath.Collector
Feb 18 09:46:10 np0005623263 openstack_network_exporter[208107]: INFO    14:46:10 main.go:48: registering *iface.Collector
Feb 18 09:46:10 np0005623263 openstack_network_exporter[208107]: INFO    14:46:10 main.go:48: registering *memory.Collector
Feb 18 09:46:10 np0005623263 openstack_network_exporter[208107]: INFO    14:46:10 main.go:55: *ovnnorthd.Collector not registered, metric set not enabled
Feb 18 09:46:10 np0005623263 openstack_network_exporter[208107]: INFO    14:46:10 main.go:48: registering *ovn.Collector
Feb 18 09:46:10 np0005623263 openstack_network_exporter[208107]: INFO    14:46:10 main.go:55: *ovsdbserver.Collector not registered, metric set not enabled
Feb 18 09:46:10 np0005623263 openstack_network_exporter[208107]: INFO    14:46:10 main.go:48: registering *pmd_perf.Collector
Feb 18 09:46:10 np0005623263 openstack_network_exporter[208107]: INFO    14:46:10 main.go:48: registering *pmd_rxq.Collector
Feb 18 09:46:10 np0005623263 openstack_network_exporter[208107]: INFO    14:46:10 main.go:48: registering *vswitch.Collector
Feb 18 09:46:10 np0005623263 openstack_network_exporter[208107]: NOTICE  14:46:10 main.go:76: listening on https://:9105/metrics
Feb 18 09:46:10 np0005623263 podman[208092]: 2026-02-18 14:46:10.978179632 +0000 UTC m=+0.175634910 container start 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., managed_by=edpm_ansible, version=9.7, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, name=ubi9/ubi-minimal)
Feb 18 09:46:10 np0005623263 podman[208092]: openstack_network_exporter
Feb 18 09:46:10 np0005623263 systemd[1]: Started openstack_network_exporter container.
Feb 18 09:46:11 np0005623263 podman[208119]: 2026-02-18 14:46:11.065613206 +0000 UTC m=+0.079731052 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9/ubi-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 18 09:46:11 np0005623263 python3.9[208290]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 18 09:46:12 np0005623263 python3.9[208443]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:46:12 np0005623263 python3.9[208569]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425972.029403-1074-280971371579524/.source.yaml _original_basename=.go5frh6i follow=False checksum=1a0cfd9743973e78b9a57d5b36467072e7b0c473 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:13 np0005623263 python3.9[208722]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 18 09:46:14 np0005623263 podman[208846]: 2026-02-18 14:46:14.43881919 +0000 UTC m=+0.064972313 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 09:46:14 np0005623263 python3.9[208887]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Feb 18 09:46:15 np0005623263 python3.9[209057]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:46:15 np0005623263 systemd[1]: Started libpod-conmon-b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164.scope.
Feb 18 09:46:15 np0005623263 podman[209058]: 2026-02-18 14:46:15.445346694 +0000 UTC m=+0.071081724 container exec b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Feb 18 09:46:15 np0005623263 podman[209058]: 2026-02-18 14:46:15.462315041 +0000 UTC m=+0.088050031 container exec_died b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 18 09:46:15 np0005623263 systemd[1]: libpod-conmon-b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164.scope: Deactivated successfully.
Feb 18 09:46:16 np0005623263 python3.9[209242]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:46:16 np0005623263 systemd[1]: Started libpod-conmon-b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164.scope.
Feb 18 09:46:16 np0005623263 podman[209243]: 2026-02-18 14:46:16.142794934 +0000 UTC m=+0.078698255 container exec b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 18 09:46:16 np0005623263 podman[209263]: 2026-02-18 14:46:16.199185661 +0000 UTC m=+0.048304385 container exec_died b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 18 09:46:16 np0005623263 podman[209243]: 2026-02-18 14:46:16.206160444 +0000 UTC m=+0.142063775 container exec_died b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller)
Feb 18 09:46:16 np0005623263 systemd[1]: libpod-conmon-b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164.scope: Deactivated successfully.
Feb 18 09:46:16 np0005623263 python3.9[209428]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:17 np0005623263 python3.9[209581]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Feb 18 09:46:18 np0005623263 python3.9[209747]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:46:18 np0005623263 systemd[1]: Started libpod-conmon-99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05.scope.
Feb 18 09:46:18 np0005623263 podman[209748]: 2026-02-18 14:46:18.142272747 +0000 UTC m=+0.073826556 container exec 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb 18 09:46:18 np0005623263 podman[209748]: 2026-02-18 14:46:18.173994393 +0000 UTC m=+0.105548242 container exec_died 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 18 09:46:18 np0005623263 systemd[1]: libpod-conmon-99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05.scope: Deactivated successfully.
Feb 18 09:46:18 np0005623263 python3.9[209933]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:46:19 np0005623263 systemd[1]: Started libpod-conmon-99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05.scope.
Feb 18 09:46:19 np0005623263 podman[209934]: 2026-02-18 14:46:19.041669578 +0000 UTC m=+0.059239762 container exec 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 18 09:46:19 np0005623263 podman[209954]: 2026-02-18 14:46:19.100264922 +0000 UTC m=+0.049291910 container exec_died 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 18 09:46:19 np0005623263 podman[209934]: 2026-02-18 14:46:19.105080269 +0000 UTC m=+0.122650413 container exec_died 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true)
Feb 18 09:46:19 np0005623263 systemd[1]: libpod-conmon-99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05.scope: Deactivated successfully.
Feb 18 09:46:19 np0005623263 python3.9[210119]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:20 np0005623263 python3.9[210272]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Feb 18 09:46:20 np0005623263 python3.9[210438]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:46:21 np0005623263 systemd[1]: Started libpod-conmon-126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630.scope.
Feb 18 09:46:21 np0005623263 podman[210439]: 2026-02-18 14:46:21.055496409 +0000 UTC m=+0.113694207 container exec 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, tcib_managed=true)
Feb 18 09:46:21 np0005623263 podman[210439]: 2026-02-18 14:46:21.064913897 +0000 UTC m=+0.123111675 container exec_died 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, org.label-schema.build-date=20260216, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 18 09:46:21 np0005623263 systemd[1]: libpod-conmon-126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630.scope: Deactivated successfully.
Feb 18 09:46:21 np0005623263 python3.9[210623]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:46:21 np0005623263 systemd[1]: Started libpod-conmon-126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630.scope.
Feb 18 09:46:21 np0005623263 podman[210624]: 2026-02-18 14:46:21.851614859 +0000 UTC m=+0.071003622 container exec 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 18 09:46:21 np0005623263 podman[210643]: 2026-02-18 14:46:21.910165802 +0000 UTC m=+0.048741955 container exec_died 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_compute)
Feb 18 09:46:21 np0005623263 podman[210624]: 2026-02-18 14:46:21.915498863 +0000 UTC m=+0.134887626 container exec_died 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, io.buildah.version=1.43.0, tcib_managed=true, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2)
Feb 18 09:46:21 np0005623263 systemd[1]: libpod-conmon-126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630.scope: Deactivated successfully.
Feb 18 09:46:22 np0005623263 python3.9[210808]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:23 np0005623263 python3.9[210963]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Feb 18 09:46:23 np0005623263 podman[210977]: 2026-02-18 14:46:23.306995643 +0000 UTC m=+0.056806498 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 18 09:46:23 np0005623263 python3.9[211151]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:46:24 np0005623263 systemd[1]: Started libpod-conmon-4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c.scope.
Feb 18 09:46:24 np0005623263 podman[211152]: 2026-02-18 14:46:24.050546728 +0000 UTC m=+0.075021768 container exec 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 09:46:24 np0005623263 podman[211152]: 2026-02-18 14:46:24.083527317 +0000 UTC m=+0.108002387 container exec_died 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 09:46:24 np0005623263 systemd[1]: libpod-conmon-4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c.scope: Deactivated successfully.
Feb 18 09:46:24 np0005623263 podman[211308]: 2026-02-18 14:46:24.535888778 +0000 UTC m=+0.076572109 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260216, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 18 09:46:24 np0005623263 python3.9[211358]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:46:24 np0005623263 systemd[1]: Started libpod-conmon-4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c.scope.
Feb 18 09:46:24 np0005623263 podman[211359]: 2026-02-18 14:46:24.784438728 +0000 UTC m=+0.075730396 container exec 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 09:46:24 np0005623263 podman[211379]: 2026-02-18 14:46:24.845229281 +0000 UTC m=+0.052220238 container exec_died 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 18 09:46:24 np0005623263 podman[211359]: 2026-02-18 14:46:24.873784423 +0000 UTC m=+0.165076061 container exec_died 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 18 09:46:24 np0005623263 systemd[1]: libpod-conmon-4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c.scope: Deactivated successfully.
Feb 18 09:46:25 np0005623263 python3.9[211544]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:26 np0005623263 python3.9[211697]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Feb 18 09:46:26 np0005623263 python3.9[211863]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:46:26 np0005623263 systemd[1]: Started libpod-conmon-a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e.scope.
Feb 18 09:46:26 np0005623263 podman[211864]: 2026-02-18 14:46:26.970643621 +0000 UTC m=+0.160789578 container exec a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 09:46:27 np0005623263 podman[211885]: 2026-02-18 14:46:27.063233821 +0000 UTC m=+0.083202883 container exec_died a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 09:46:27 np0005623263 podman[211864]: 2026-02-18 14:46:27.093465198 +0000 UTC m=+0.283611095 container exec_died a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 18 09:46:27 np0005623263 systemd[1]: libpod-conmon-a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e.scope: Deactivated successfully.
Feb 18 09:46:27 np0005623263 podman[211881]: 2026-02-18 14:46:27.212023833 +0000 UTC m=+0.232350495 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2)
Feb 18 09:46:27 np0005623263 python3.9[212076]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:46:27 np0005623263 systemd[1]: Started libpod-conmon-a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e.scope.
Feb 18 09:46:27 np0005623263 podman[212077]: 2026-02-18 14:46:27.932820308 +0000 UTC m=+0.161438926 container exec a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 18 09:46:27 np0005623263 podman[212096]: 2026-02-18 14:46:27.996161067 +0000 UTC m=+0.053595293 container exec_died a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 18 09:46:28 np0005623263 podman[212077]: 2026-02-18 14:46:28.002661058 +0000 UTC m=+0.231279666 container exec_died a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 18 09:46:28 np0005623263 systemd[1]: libpod-conmon-a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e.scope: Deactivated successfully.
Feb 18 09:46:28 np0005623263 python3.9[212261]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:29 np0005623263 python3.9[212414]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Feb 18 09:46:29 np0005623263 python3.9[212581]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:46:30 np0005623263 systemd[1]: Started libpod-conmon-4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165.scope.
Feb 18 09:46:30 np0005623263 podman[212582]: 2026-02-18 14:46:30.022276681 +0000 UTC m=+0.146601604 container exec 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1770267347)
Feb 18 09:46:30 np0005623263 podman[212601]: 2026-02-18 14:46:30.090226792 +0000 UTC m=+0.052383391 container exec_died 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, release=1770267347, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, vcs-type=git, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Feb 18 09:46:30 np0005623263 podman[212582]: 2026-02-18 14:46:30.155678177 +0000 UTC m=+0.280003080 container exec_died 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, vendor=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 18 09:46:30 np0005623263 systemd[1]: libpod-conmon-4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165.scope: Deactivated successfully.
Feb 18 09:46:30 np0005623263 python3.9[212766]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:46:30 np0005623263 systemd[1]: Started libpod-conmon-4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165.scope.
Feb 18 09:46:30 np0005623263 podman[212767]: 2026-02-18 14:46:30.824859552 +0000 UTC m=+0.067265364 container exec 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, version=9.7, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 18 09:46:30 np0005623263 podman[212767]: 2026-02-18 14:46:30.863368237 +0000 UTC m=+0.105774069 container exec_died 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Feb 18 09:46:30 np0005623263 systemd[1]: libpod-conmon-4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165.scope: Deactivated successfully.
Feb 18 09:46:31 np0005623263 python3.9[212951]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:32 np0005623263 python3.9[213104]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:32 np0005623263 nova_compute[189016]: 2026-02-18 14:46:32.339 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:46:32 np0005623263 nova_compute[189016]: 2026-02-18 14:46:32.358 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:46:32 np0005623263 nova_compute[189016]: 2026-02-18 14:46:32.359 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 09:46:32 np0005623263 nova_compute[189016]: 2026-02-18 14:46:32.359 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 09:46:32 np0005623263 nova_compute[189016]: 2026-02-18 14:46:32.371 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 18 09:46:32 np0005623263 nova_compute[189016]: 2026-02-18 14:46:32.372 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:46:32 np0005623263 python3.9[213257]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:46:33 np0005623263 nova_compute[189016]: 2026-02-18 14:46:33.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:46:33 np0005623263 nova_compute[189016]: 2026-02-18 14:46:33.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:46:33 np0005623263 nova_compute[189016]: 2026-02-18 14:46:33.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:46:33 np0005623263 nova_compute[189016]: 2026-02-18 14:46:33.050 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 09:46:33 np0005623263 python3.9[213381]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771425992.242782-1305-174945209723443/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:33 np0005623263 python3.9[213534]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.049 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.080 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.081 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.081 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.081 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.226 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.227 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5796MB free_disk=72.26982116699219GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.227 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.228 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.285 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.285 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.305 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.318 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.319 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 09:46:34 np0005623263 nova_compute[189016]: 2026-02-18 14:46:34.319 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 09:46:34 np0005623263 python3.9[213687]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:46:34 np0005623263 python3.9[213766]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:35 np0005623263 python3.9[213919]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:46:35 np0005623263 python3.9[213998]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.hng5r3t1 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:36 np0005623263 python3.9[214152]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:46:37 np0005623263 python3.9[214231]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:37 np0005623263 python3.9[214384]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:46:38 np0005623263 podman[214509]: 2026-02-18 14:46:38.246784323 +0000 UTC m=+0.051870898 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 09:46:38 np0005623263 python3[214553]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 18 09:46:39 np0005623263 python3.9[214717]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:46:39 np0005623263 python3.9[214796]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:40 np0005623263 python3.9[214949]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:46:40 np0005623263 python3.9[215028]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:41 np0005623263 podman[215182]: 2026-02-18 14:46:41.19160742 +0000 UTC m=+0.050235977 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 18 09:46:41 np0005623263 python3.9[215184]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:46:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:46:41.412 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 09:46:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:46:41.414 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 09:46:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:46:41.415 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 09:46:41 np0005623263 python3.9[215284]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:42 np0005623263 python3.9[215437]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:46:42 np0005623263 python3.9[215516]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:43 np0005623263 python3.9[215669]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:46:43 np0005623263 python3.9[215795]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771426002.9059267-1430-17176839192026/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:44 np0005623263 python3.9[215948]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:44 np0005623263 podman[215949]: 2026-02-18 14:46:44.535750202 +0000 UTC m=+0.045987567 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 09:46:45 np0005623263 python3.9[216121]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:46:45 np0005623263 python3.9[216277]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:46 np0005623263 python3.9[216430]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:46:47 np0005623263 python3.9[216584]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:46:47 np0005623263 python3.9[216739]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:46:48 np0005623263 python3.9[216895]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:46:48 np0005623263 systemd[1]: session-25.scope: Deactivated successfully.
Feb 18 09:46:48 np0005623263 systemd[1]: session-25.scope: Consumed 1min 31.385s CPU time.
Feb 18 09:46:48 np0005623263 systemd-logind[831]: Session 25 logged out. Waiting for processes to exit.
Feb 18 09:46:48 np0005623263 systemd-logind[831]: Removed session 25.
Feb 18 09:46:53 np0005623263 podman[216923]: 2026-02-18 14:46:53.715082362 +0000 UTC m=+0.047520890 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 18 09:46:54 np0005623263 systemd-logind[831]: New session 26 of user zuul.
Feb 18 09:46:54 np0005623263 systemd[1]: Started Session 26 of User zuul.
Feb 18 09:46:54 np0005623263 podman[217003]: 2026-02-18 14:46:54.722162025 +0000 UTC m=+0.055812413 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 18 09:46:55 np0005623263 python3.9[217123]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 18 09:46:55 np0005623263 systemd[1]: Reloading.
Feb 18 09:46:55 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:46:55 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:46:56 np0005623263 python3.9[217314]: ansible-ansible.builtin.service_facts Invoked
Feb 18 09:46:56 np0005623263 network[217331]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 18 09:46:56 np0005623263 network[217332]: 'network-scripts' will be removed from distribution in near future.
Feb 18 09:46:56 np0005623263 network[217333]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 18 09:46:57 np0005623263 podman[217368]: 2026-02-18 14:46:57.33984196 +0000 UTC m=+0.090493391 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 18 09:46:59 np0005623263 python3.9[217632]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:46:59 np0005623263 podman[204930]: time="2026-02-18T14:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 09:46:59 np0005623263 podman[204930]: @ - - [18/Feb/2026:14:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21988 "" "Go-http-client/1.1"
Feb 18 09:46:59 np0005623263 podman[204930]: @ - - [18/Feb/2026:14:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3003 "" "Go-http-client/1.1"
Feb 18 09:47:00 np0005623263 python3.9[217790]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:00 np0005623263 python3.9[217943]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:01 np0005623263 openstack_network_exporter[208107]: ERROR   14:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 09:47:01 np0005623263 openstack_network_exporter[208107]: 
Feb 18 09:47:01 np0005623263 openstack_network_exporter[208107]: ERROR   14:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 09:47:01 np0005623263 openstack_network_exporter[208107]: 
Feb 18 09:47:01 np0005623263 python3.9[218099]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:47:02 np0005623263 python3.9[218251]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 18 09:47:03 np0005623263 python3.9[218404]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 18 09:47:03 np0005623263 systemd[1]: Reloading.
Feb 18 09:47:03 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:47:03 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:47:03 np0005623263 python3.9[218600]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:47:04 np0005623263 python3.9[218754]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:47:05 np0005623263 python3.9[218904]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:47:06 np0005623263 python3.9[219056]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:47:06 np0005623263 python3.9[219177]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771426025.3411572-120-201886315147006/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:47:07 np0005623263 python3.9[219327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:47:07 np0005623263 python3.9[219448]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771426026.7634091-135-35625856781902/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:47:08 np0005623263 python3.9[219601]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Feb 18 09:47:08 np0005623263 podman[219603]: 2026-02-18 14:47:08.532169847 +0000 UTC m=+0.051010478 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 18 09:47:09 np0005623263 python3.9[219776]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:47:10 np0005623263 python3.9[219897]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771426029.1980104-181-96115486366205/.source.conf _original_basename=ceilometer.conf follow=False checksum=06bb8599d9c8a601385c703338dd9ca518a4891f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:10 np0005623263 python3.9[220047]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:47:11 np0005623263 python3.9[220168]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771426030.3710237-181-1651925865804/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:11 np0005623263 podman[220169]: 2026-02-18 14:47:11.308744468 +0000 UTC m=+0.071959978 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, build-date=2026-02-05T04:57:10Z, vcs-type=git, version=9.7)
Feb 18 09:47:11 np0005623263 python3.9[220339]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:47:12 np0005623263 python3.9[220460]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771426031.3444364-181-167513081048547/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:12 np0005623263 python3.9[220610]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:47:13 np0005623263 python3.9[220762]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:47:13 np0005623263 python3.9[220914]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:47:14 np0005623263 python3.9[221035]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1771426033.4780045-240-70388674546670/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:14 np0005623263 podman[221159]: 2026-02-18 14:47:14.699870954 +0000 UTC m=+0.048261491 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 18 09:47:14 np0005623263 python3.9[221205]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:15 np0005623263 python3.9[221361]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:16 np0005623263 python3.9[221514]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:47:16 np0005623263 python3.9[221667]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:47:17 np0005623263 python3.9[221791]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771426036.2224267-279-92126357504180/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:47:17 np0005623263 python3.9[221868]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:47:17 np0005623263 python3.9[221994]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771426036.2224267-279-92126357504180/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:47:18 np0005623263 python3.9[222147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:47:18 np0005623263 python3.9[222271]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771426037.9537032-279-144736876806550/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:47:19 np0005623263 python3.9[222424]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:20 np0005623263 python3.9[222577]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:47:20 np0005623263 python3.9[222730]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:47:21 np0005623263 python3.9[222854]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771426040.309993-337-224394609993628/.source.json _original_basename=.3ezhjzdv follow=False checksum=fa47598aea39469905a43b7b570ec2fd120965fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:21 np0005623263 python3.9[223004]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:23 np0005623263 python3.9[223428]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_pattern=*.json debug=False
Feb 18 09:47:24 np0005623263 podman[223552]: 2026-02-18 14:47:24.406757968 +0000 UTC m=+0.046246444 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 09:47:24 np0005623263 python3.9[223605]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.182 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.184 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.184 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.185 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f78deee07d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.185 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.186 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.186 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.186 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.186 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.186 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.186 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.187 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.187 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.187 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.187 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.187 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.188 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.188 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.188 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f78deee0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.188 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.189 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f78deee0830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.188 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.189 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.189 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.189 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f78deee2ae0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.189 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.189 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.190 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.190 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f78deee0890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.190 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.190 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.191 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f78deee08f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.191 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.191 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f78deee2390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.192 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.192 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f78e1127770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.incoming.packets.drop': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.193 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8e1850>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.incoming.packets.drop': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.193 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f78deee0950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.193 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.194 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f78deee21b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.194 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.194 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f78deee0d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.194 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.194 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f78deee09b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.194 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.194 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f78deee1010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.194 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.194 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f78deee0a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.194 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.194 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f78deee1280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.194 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.195 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f78deee2240>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.195 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.195 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f78deee0a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.195 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.195 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f78deee22d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.195 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.195 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f78deee2330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.195 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.195 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f78deee23c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.195 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.195 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f78deee0cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.195 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f78deee2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.196 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f78deee0d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.196 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f78deee1520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.196 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f78deee20f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.196 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f78deee2420>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.196 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.196 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.197 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.197 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.197 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.197 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.197 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.197 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.197 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.198 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.198 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.198 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.198 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.198 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.198 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.198 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.198 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.198 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.198 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.199 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.199 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.199 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.199 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.199 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.199 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.199 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 ceilometer_agent_compute[198738]: 2026-02-18 14:47:25.199 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 09:47:25 np0005623263 podman[223730]: 2026-02-18 14:47:25.333638054 +0000 UTC m=+0.057599184 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Feb 18 09:47:25 np0005623263 python3[223776]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_id=ceilometer_agent_ipmi config_overrides={} config_patterns=*.json containers=['ceilometer_agent_ipmi'] log_base_path=/var/log/containers/stdouts debug=False
Feb 18 09:47:25 np0005623263 podman[223814]: 2026-02-18 14:47:25.724132506 +0000 UTC m=+0.046914103 container create 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 18 09:47:25 np0005623263 podman[223814]: 2026-02-18 14:47:25.695959152 +0000 UTC m=+0.018740769 image pull 5a0c248a731dc2e1754b1906fede374f0f92203547e5b10eb435ef1a64b36296 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Feb 18 09:47:25 np0005623263 python3[223776]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49 --healthcheck-command /openstack/healthcheck ipmi --label config_id=ceilometer_agent_ipmi --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Feb 18 09:47:26 np0005623263 python3.9[224005]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:47:27 np0005623263 python3.9[224160]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:27 np0005623263 python3.9[224237]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:47:27 np0005623263 podman[224238]: 2026-02-18 14:47:27.603894168 +0000 UTC m=+0.087478525 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 18 09:47:28 np0005623263 python3.9[224415]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771426047.5475953-415-276834519098526/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:28 np0005623263 python3.9[224492]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 18 09:47:29 np0005623263 systemd[1]: Reloading.
Feb 18 09:47:29 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:47:29 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:47:29 np0005623263 podman[204930]: time="2026-02-18T14:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 09:47:29 np0005623263 podman[204930]: @ - - [18/Feb/2026:14:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 25037 "" "Go-http-client/1.1"
Feb 18 09:47:29 np0005623263 podman[204930]: @ - - [18/Feb/2026:14:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3009 "" "Go-http-client/1.1"
Feb 18 09:47:29 np0005623263 python3.9[224611]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:47:29 np0005623263 systemd[1]: Reloading.
Feb 18 09:47:30 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:47:30 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:47:30 np0005623263 systemd[1]: Starting ceilometer_agent_ipmi container...
Feb 18 09:47:30 np0005623263 systemd[1]: Started libcrun container.
Feb 18 09:47:30 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348b421f04fd8fa19b187e7934c105319ee3304942a179ae20763f607ca588dc/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Feb 18 09:47:30 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348b421f04fd8fa19b187e7934c105319ee3304942a179ae20763f607ca588dc/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Feb 18 09:47:30 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348b421f04fd8fa19b187e7934c105319ee3304942a179ae20763f607ca588dc/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Feb 18 09:47:30 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348b421f04fd8fa19b187e7934c105319ee3304942a179ae20763f607ca588dc/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Feb 18 09:47:30 np0005623263 systemd[1]: Started /usr/bin/podman healthcheck run 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22.
Feb 18 09:47:30 np0005623263 podman[224657]: 2026-02-18 14:47:30.378465702 +0000 UTC m=+0.130238900 container init 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: + sudo -E kolla_set_configs
Feb 18 09:47:30 np0005623263 podman[224657]: 2026-02-18 14:47:30.402131199 +0000 UTC m=+0.153904397 container start 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Feb 18 09:47:30 np0005623263 podman[224657]: ceilometer_agent_ipmi
Feb 18 09:47:30 np0005623263 systemd[1]: Started ceilometer_agent_ipmi container.
Feb 18 09:47:30 np0005623263 podman[224679]: 2026-02-18 14:47:30.453708982 +0000 UTC m=+0.040985335 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 18 09:47:30 np0005623263 systemd[1]: 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22-1779817d116520f9.service: Main process exited, code=exited, status=1/FAILURE
Feb 18 09:47:30 np0005623263 systemd[1]: 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22-1779817d116520f9.service: Failed with result 'exit-code'.
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: INFO:__main__:Validating config file
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: INFO:__main__:Copying service configuration files
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: INFO:__main__:Writing out command to execute
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: ++ cat /run_command
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: + ARGS=
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: + sudo kolla_copy_cacerts
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: + [[ ! -n '' ]]
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: + . kolla_extend_start
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: + umask 0022
Feb 18 09:47:30 np0005623263 ceilometer_agent_ipmi[224672]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Feb 18 09:47:31 np0005623263 python3.9[224853]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 18 09:47:31 np0005623263 openstack_network_exporter[208107]: ERROR   14:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 09:47:31 np0005623263 openstack_network_exporter[208107]: 
Feb 18 09:47:31 np0005623263 openstack_network_exporter[208107]: ERROR   14:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 09:47:31 np0005623263 openstack_network_exporter[208107]: 
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.436 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.436 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.437 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.437 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.437 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.437 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.437 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.437 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.437 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.437 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.437 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.437 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.438 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.438 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.438 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.438 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.438 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.438 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.438 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.439 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.439 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.439 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.439 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.439 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.439 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.439 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.439 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.439 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.439 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.439 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.440 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.440 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.440 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.440 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.440 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.440 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.440 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.440 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.440 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.440 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.440 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.441 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.441 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.441 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.441 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.441 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.441 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.441 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.441 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.441 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.441 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.441 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.441 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.442 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.442 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.442 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.442 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.442 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.442 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.442 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.442 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.442 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.443 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.443 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.443 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.443 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.443 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.443 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.443 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.443 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.444 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.444 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.444 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.444 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.444 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.444 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.444 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.444 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.444 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.445 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.445 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.445 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.445 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.445 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.445 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.445 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.445 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.445 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.445 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.446 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.446 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.446 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.446 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.446 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.446 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.446 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.446 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.446 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.447 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.447 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.447 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.447 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.447 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.447 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.447 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.447 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.448 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.448 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.448 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.448 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.448 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.448 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.448 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.448 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.449 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.449 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.449 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.449 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.449 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.449 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.449 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.449 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.449 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.450 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.450 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.450 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.450 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.450 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.450 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.450 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.450 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.451 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.451 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.451 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.451 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.451 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.451 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.451 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.451 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.452 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.452 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.452 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.452 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.452 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.452 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.452 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.452 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.452 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.452 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.453 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.453 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.453 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.453 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.453 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.453 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.453 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.453 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.453 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.453 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.453 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.454 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.454 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.454 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.454 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.454 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.470 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.471 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.471 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Feb 18 09:47:31 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:31.555 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp2_uzt2m7/privsep.sock']
Feb 18 09:47:32 np0005623263 python3.9[225014]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.224 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.225 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp2_uzt2m7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.082 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.091 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.094 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.094 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Feb 18 09:47:32 np0005623263 nova_compute[189016]: 2026-02-18 14:47:32.320 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 18 09:47:32 np0005623263 nova_compute[189016]: 2026-02-18 14:47:32.322 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 18 09:47:32 np0005623263 nova_compute[189016]: 2026-02-18 14:47:32.322 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.331 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.332 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.333 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.333 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.333 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.333 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.333 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.333 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.333 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.334 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.334 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.334 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.334 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.336 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.337 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.337 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.337 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.337 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.337 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.337 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.337 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.337 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.337 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.337 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.338 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.338 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.338 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.338 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.338 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.338 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.338 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.338 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 nova_compute[189016]: 2026-02-18 14:47:32.338 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.338 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.339 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.339 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.339 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.339 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.339 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.339 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.339 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.339 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.339 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.339 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.339 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.340 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.340 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.340 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.340 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.340 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.340 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.340 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.340 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.340 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.340 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.341 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.341 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.341 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.341 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.341 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.341 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.341 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.341 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.341 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.341 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.342 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.342 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.342 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.342 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.342 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.342 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.342 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.342 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.342 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.343 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.343 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.343 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.343 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.343 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.343 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.343 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.343 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.343 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.343 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.344 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.344 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.344 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.344 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.344 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.344 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.344 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.344 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.344 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.345 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.345 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.345 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.345 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.345 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.345 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.345 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.345 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.345 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.345 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.345 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.346 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.346 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.346 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.346 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.346 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.346 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.346 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.346 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.346 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.347 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.347 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.347 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.347 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.347 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.347 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.347 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.347 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.348 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.348 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.348 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.348 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.348 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.348 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.348 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.348 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.348 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.348 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.349 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.349 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.349 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.349 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.349 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.349 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.349 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.349 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.349 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.349 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.349 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.350 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.350 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.350 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.350 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.350 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.350 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.350 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.350 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.350 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.350 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.350 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.351 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.351 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.351 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.351 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.351 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.351 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.351 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.351 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.351 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.351 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.351 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.351 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.351 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.352 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.352 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.352 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.352 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.352 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.352 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.352 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.352 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.352 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.352 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.352 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.353 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.353 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.353 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.353 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.353 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.353 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.353 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.353 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.353 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.353 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.353 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.354 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.354 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.354 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.354 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.354 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.354 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.354 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.354 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.354 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.355 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.355 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.355 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.355 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.355 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.355 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.355 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.355 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.355 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.356 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.356 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.356 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.356 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.356 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.356 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.356 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.356 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.356 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.356 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.356 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Feb 18 09:47:32 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:32.359 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Feb 18 09:47:32 np0005623263 python3.9[225145]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771426051.675845-460-565445617073/.source.yaml _original_basename=.psh3e08l follow=False checksum=247392e7788f80ae2a5390cd0b74f941bb0dc6bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:33 np0005623263 nova_compute[189016]: 2026-02-18 14:47:33.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:47:33 np0005623263 nova_compute[189016]: 2026-02-18 14:47:33.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:47:33 np0005623263 python3.9[225298]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:33 np0005623263 python3.9[225451]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 18 09:47:34 np0005623263 nova_compute[189016]: 2026-02-18 14:47:34.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:47:34 np0005623263 nova_compute[189016]: 2026-02-18 14:47:34.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:47:34 np0005623263 nova_compute[189016]: 2026-02-18 14:47:34.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:47:34 np0005623263 nova_compute[189016]: 2026-02-18 14:47:34.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 09:47:34 np0005623263 python3.9[225601]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/kepler state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:35 np0005623263 nova_compute[189016]: 2026-02-18 14:47:35.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:47:35 np0005623263 nova_compute[189016]: 2026-02-18 14:47:35.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:47:35 np0005623263 nova_compute[189016]: 2026-02-18 14:47:35.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:47:35 np0005623263 nova_compute[189016]: 2026-02-18 14:47:35.085 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 09:47:35 np0005623263 nova_compute[189016]: 2026-02-18 14:47:35.086 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 09:47:35 np0005623263 nova_compute[189016]: 2026-02-18 14:47:35.087 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 09:47:35 np0005623263 nova_compute[189016]: 2026-02-18 14:47:35.087 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 09:47:35 np0005623263 nova_compute[189016]: 2026-02-18 14:47:35.225 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 09:47:35 np0005623263 nova_compute[189016]: 2026-02-18 14:47:35.226 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5795MB free_disk=72.3016357421875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 09:47:35 np0005623263 nova_compute[189016]: 2026-02-18 14:47:35.226 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 09:47:35 np0005623263 nova_compute[189016]: 2026-02-18 14:47:35.227 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 09:47:35 np0005623263 nova_compute[189016]: 2026-02-18 14:47:35.291 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 09:47:35 np0005623263 nova_compute[189016]: 2026-02-18 14:47:35.291 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 09:47:35 np0005623263 nova_compute[189016]: 2026-02-18 14:47:35.311 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 09:47:35 np0005623263 nova_compute[189016]: 2026-02-18 14:47:35.322 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 09:47:35 np0005623263 nova_compute[189016]: 2026-02-18 14:47:35.323 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 09:47:35 np0005623263 nova_compute[189016]: 2026-02-18 14:47:35.323 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 09:47:36 np0005623263 python3.9[226025]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/kepler config_pattern=*.json debug=False
Feb 18 09:47:36 np0005623263 python3.9[226178]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 18 09:47:37 np0005623263 python3[226331]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/kepler config_id=kepler config_overrides={} config_patterns=*.json containers=['kepler'] log_base_path=/var/log/containers/stdouts debug=False
Feb 18 09:47:37 np0005623263 podman[226364]: 2026-02-18 14:47:37.841540267 +0000 UTC m=+0.054231049 container create a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-type=git, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.openshift.expose-services=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, name=ubi9, managed_by=edpm_ansible, config_id=kepler, release-0.7.12=, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 18 09:47:37 np0005623263 podman[226364]: 2026-02-18 14:47:37.816498161 +0000 UTC m=+0.029188963 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Feb 18 09:47:37 np0005623263 python3[226331]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_CONTAINER_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env EXPOSE_VM_METRICS=true --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=kepler --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Feb 18 09:47:38 np0005623263 python3.9[226556]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:47:38 np0005623263 podman[226585]: 2026-02-18 14:47:38.713709741 +0000 UTC m=+0.046585414 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 18 09:47:39 np0005623263 python3.9[226735]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:39 np0005623263 python3.9[226812]: ansible-stat Invoked with path=/etc/systemd/system/edpm_kepler_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:47:40 np0005623263 python3.9[226964]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771426059.4826808-557-274248164211820/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:40 np0005623263 python3.9[227041]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 18 09:47:40 np0005623263 systemd[1]: Reloading.
Feb 18 09:47:40 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:47:40 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:47:41 np0005623263 python3.9[227160]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 18 09:47:41 np0005623263 systemd[1]: Reloading.
Feb 18 09:47:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:47:41.412 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 09:47:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:47:41.414 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 09:47:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:47:41.414 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 09:47:41 np0005623263 podman[227162]: 2026-02-18 14:47:41.417399698 +0000 UTC m=+0.065364063 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.7, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1770267347, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Feb 18 09:47:41 np0005623263 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 18 09:47:41 np0005623263 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 18 09:47:41 np0005623263 systemd[1]: Starting kepler container...
Feb 18 09:47:41 np0005623263 systemd[1]: Started libcrun container.
Feb 18 09:47:41 np0005623263 systemd[1]: Started /usr/bin/podman healthcheck run a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d.
Feb 18 09:47:41 np0005623263 podman[227228]: 2026-02-18 14:47:41.794793261 +0000 UTC m=+0.101940693 container init a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, managed_by=edpm_ansible, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., config_id=kepler, name=ubi9, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, release=1214.1726694543, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Feb 18 09:47:41 np0005623263 podman[227228]: 2026-02-18 14:47:41.823567232 +0000 UTC m=+0.130714634 container start a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release-0.7.12=, version=9.4, managed_by=edpm_ansible, vcs-type=git, config_id=kepler, io.openshift.expose-services=, name=ubi9, distribution-scope=public, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.openshift.tags=base rhel9)
Feb 18 09:47:41 np0005623263 kepler[227243]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Feb 18 09:47:41 np0005623263 podman[227228]: kepler
Feb 18 09:47:41 np0005623263 systemd[1]: Started kepler container.
Feb 18 09:47:41 np0005623263 kepler[227243]: I0218 14:47:41.840475       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Feb 18 09:47:41 np0005623263 kepler[227243]: I0218 14:47:41.841227       1 config.go:293] using gCgroup ID in the BPF program: true
Feb 18 09:47:41 np0005623263 kepler[227243]: I0218 14:47:41.841366       1 config.go:295] kernel version: 5.14
Feb 18 09:47:41 np0005623263 kepler[227243]: I0218 14:47:41.842137       1 power.go:78] Unable to obtain power, use estimate method
Feb 18 09:47:41 np0005623263 kepler[227243]: I0218 14:47:41.842303       1 redfish.go:169] failed to get redfish credential file path
Feb 18 09:47:41 np0005623263 kepler[227243]: I0218 14:47:41.843121       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Feb 18 09:47:41 np0005623263 kepler[227243]: I0218 14:47:41.843173       1 power.go:79] using none to obtain power
Feb 18 09:47:41 np0005623263 kepler[227243]: E0218 14:47:41.843271       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Feb 18 09:47:41 np0005623263 kepler[227243]: E0218 14:47:41.843398       1 exporter.go:154] failed to init GPU accelerators: no devices found
Feb 18 09:47:41 np0005623263 kepler[227243]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Feb 18 09:47:41 np0005623263 kepler[227243]: I0218 14:47:41.846741       1 exporter.go:84] Number of CPUs: 8
Feb 18 09:47:41 np0005623263 podman[227253]: 2026-02-18 14:47:41.897769693 +0000 UTC m=+0.069707015 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=kepler, config_id=kepler, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, maintainer=Red Hat, Inc., distribution-scope=public, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9)
Feb 18 09:47:41 np0005623263 systemd[1]: a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d-764a657440ace2e5.service: Main process exited, code=exited, status=1/FAILURE
Feb 18 09:47:41 np0005623263 systemd[1]: a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d-764a657440ace2e5.service: Failed with result 'exit-code'.
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.220888       1 watcher.go:83] Using in cluster k8s config
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.220925       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Feb 18 09:47:42 np0005623263 kepler[227243]: E0218 14:47:42.221185       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.224007       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.224037       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.226896       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.226924       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.235098       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.235131       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.235148       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.240040       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.240064       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.240067       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.240071       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.240075       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.240085       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.240377       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.240413       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.240428       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.240440       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.240572       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Feb 18 09:47:42 np0005623263 kepler[227243]: I0218 14:47:42.240865       1 exporter.go:208] Started Kepler in 400.824053ms
Feb 18 09:47:42 np0005623263 python3.9[227438]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 18 09:47:43 np0005623263 python3.9[227591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:47:43 np0005623263 python3.9[227717]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771426062.8851855-602-35322114007595/.source.yaml _original_basename=.vdacrj5a follow=False checksum=35e26352b98910bd652d9b8105ca6c8113948672 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:44 np0005623263 podman[227871]: 2026-02-18 14:47:44.785198066 +0000 UTC m=+0.052241171 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 09:47:44 np0005623263 python3.9[227870]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:47:44 np0005623263 systemd[1]: Stopping ceilometer_agent_ipmi container...
Feb 18 09:47:44 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:44.973 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:45.075 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:45.075 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:45.076 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[224672]: 2026-02-18 14:47:45.081 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Feb 18 09:47:45 np0005623263 systemd[1]: libpod-9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22.scope: Deactivated successfully.
Feb 18 09:47:45 np0005623263 systemd[1]: libpod-9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22.scope: Consumed 2.192s CPU time.
Feb 18 09:47:45 np0005623263 podman[227893]: 2026-02-18 14:47:45.2770596 +0000 UTC m=+0.406258052 container died 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 09:47:45 np0005623263 systemd[1]: 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22-1779817d116520f9.timer: Deactivated successfully.
Feb 18 09:47:45 np0005623263 systemd[1]: Stopped /usr/bin/podman healthcheck run 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22.
Feb 18 09:47:45 np0005623263 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22-userdata-shm.mount: Deactivated successfully.
Feb 18 09:47:45 np0005623263 systemd[1]: var-lib-containers-storage-overlay-348b421f04fd8fa19b187e7934c105319ee3304942a179ae20763f607ca588dc-merged.mount: Deactivated successfully.
Feb 18 09:47:45 np0005623263 podman[227893]: 2026-02-18 14:47:45.480506649 +0000 UTC m=+0.609705081 container cleanup 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 18 09:47:45 np0005623263 podman[227893]: ceilometer_agent_ipmi
Feb 18 09:47:45 np0005623263 podman[227919]: ceilometer_agent_ipmi
Feb 18 09:47:45 np0005623263 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Feb 18 09:47:45 np0005623263 systemd[1]: Stopped ceilometer_agent_ipmi container.
Feb 18 09:47:45 np0005623263 systemd[1]: Starting ceilometer_agent_ipmi container...
Feb 18 09:47:45 np0005623263 systemd[1]: Started libcrun container.
Feb 18 09:47:45 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348b421f04fd8fa19b187e7934c105319ee3304942a179ae20763f607ca588dc/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Feb 18 09:47:45 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348b421f04fd8fa19b187e7934c105319ee3304942a179ae20763f607ca588dc/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Feb 18 09:47:45 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348b421f04fd8fa19b187e7934c105319ee3304942a179ae20763f607ca588dc/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Feb 18 09:47:45 np0005623263 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348b421f04fd8fa19b187e7934c105319ee3304942a179ae20763f607ca588dc/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Feb 18 09:47:45 np0005623263 systemd[1]: Started /usr/bin/podman healthcheck run 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22.
Feb 18 09:47:45 np0005623263 podman[227932]: 2026-02-18 14:47:45.732493342 +0000 UTC m=+0.177592871 container init 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: + sudo -E kolla_set_configs
Feb 18 09:47:45 np0005623263 podman[227932]: 2026-02-18 14:47:45.761464061 +0000 UTC m=+0.206563570 container start 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: INFO:__main__:Validating config file
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: INFO:__main__:Copying service configuration files
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: INFO:__main__:Writing out command to execute
Feb 18 09:47:45 np0005623263 podman[227932]: ceilometer_agent_ipmi
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: ++ cat /run_command
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: + ARGS=
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: + sudo kolla_copy_cacerts
Feb 18 09:47:45 np0005623263 systemd[1]: Started ceilometer_agent_ipmi container.
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: + [[ ! -n '' ]]
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: + . kolla_extend_start
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: + umask 0022
Feb 18 09:47:45 np0005623263 ceilometer_agent_ipmi[227945]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Feb 18 09:47:45 np0005623263 podman[227952]: 2026-02-18 14:47:45.864187906 +0000 UTC m=+0.093649471 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Feb 18 09:47:45 np0005623263 systemd[1]: 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22-490c16a3e6d7dd4c.service: Main process exited, code=exited, status=1/FAILURE
Feb 18 09:47:45 np0005623263 systemd[1]: 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22-490c16a3e6d7dd4c.service: Failed with result 'exit-code'.
Feb 18 09:47:46 np0005623263 python3.9[228128]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 09:47:46 np0005623263 systemd[1]: Stopping kepler container...
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.709 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.709 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.709 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.709 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.709 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.709 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.709 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.710 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.710 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.710 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.710 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.710 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.710 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.710 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.710 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.710 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.711 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.711 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.711 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.711 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.711 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.711 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.711 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.711 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.712 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.712 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.712 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.712 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.712 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.712 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.712 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.712 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.712 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.712 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.712 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.713 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.713 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.713 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.713 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.713 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.713 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 kepler[227243]: I0218 14:47:46.705657       1 exporter.go:218] Received shutdown signal
Feb 18 09:47:46 np0005623263 kepler[227243]: I0218 14:47:46.706833       1 exporter.go:226] Exiting...
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.713 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.713 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.713 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.714 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.714 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.714 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.714 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.714 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.714 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.714 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.714 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.714 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.714 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.714 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.715 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.715 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.715 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.715 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.715 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.715 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.715 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.715 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.715 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.715 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.715 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.716 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.716 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.716 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.716 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.716 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.716 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.716 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.716 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.716 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.716 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.717 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.717 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.717 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.717 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.717 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.717 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.717 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.717 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.717 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.717 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.718 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.718 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.718 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.718 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.718 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.718 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.718 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.718 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.718 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.718 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.719 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.719 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.719 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.719 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.719 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.719 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.719 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.719 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.719 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.720 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.720 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.720 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.720 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.720 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.720 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.720 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.720 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.720 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.721 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.721 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.721 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.721 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.721 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.721 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.721 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.721 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.721 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.721 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.722 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.722 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.722 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.722 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.722 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.722 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.722 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.722 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.722 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.722 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.723 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.723 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.723 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.723 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.723 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.723 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.723 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.723 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.723 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.723 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.723 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.724 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.724 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.724 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.724 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.724 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.724 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.724 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.724 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.724 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.724 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.724 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.725 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.725 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.725 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.725 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.725 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.725 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.725 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.725 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.725 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.744 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.745 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.747 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Feb 18 09:47:46 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:46.765 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp4oh08080/privsep.sock']
Feb 18 09:47:46 np0005623263 systemd[1]: libpod-a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d.scope: Deactivated successfully.
Feb 18 09:47:46 np0005623263 podman[228132]: 2026-02-18 14:47:46.875955849 +0000 UTC m=+0.219670549 container died a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, maintainer=Red Hat, Inc., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.component=ubi9-container, name=ubi9, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=kepler, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 18 09:47:46 np0005623263 systemd[1]: a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d-764a657440ace2e5.timer: Deactivated successfully.
Feb 18 09:47:46 np0005623263 systemd[1]: Stopped /usr/bin/podman healthcheck run a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d.
Feb 18 09:47:46 np0005623263 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d-userdata-shm.mount: Deactivated successfully.
Feb 18 09:47:46 np0005623263 systemd[1]: var-lib-containers-storage-overlay-d1a358727fd3b4dc7538cdaa89d2f0662222ec8adfe416a406c34d64cce116b0-merged.mount: Deactivated successfully.
Feb 18 09:47:46 np0005623263 podman[228132]: 2026-02-18 14:47:46.914934166 +0000 UTC m=+0.258648866 container cleanup a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.component=ubi9-container, release=1214.1726694543, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, config_id=kepler, version=9.4)
Feb 18 09:47:46 np0005623263 podman[228132]: kepler
Feb 18 09:47:46 np0005623263 podman[228168]: kepler
Feb 18 09:47:46 np0005623263 systemd[1]: edpm_kepler.service: Deactivated successfully.
Feb 18 09:47:46 np0005623263 systemd[1]: Stopped kepler container.
Feb 18 09:47:46 np0005623263 systemd[1]: Starting kepler container...
Feb 18 09:47:47 np0005623263 systemd[1]: Started libcrun container.
Feb 18 09:47:47 np0005623263 systemd[1]: Started /usr/bin/podman healthcheck run a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d.
Feb 18 09:47:47 np0005623263 podman[228181]: 2026-02-18 14:47:47.079239793 +0000 UTC m=+0.093066336 container init a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, com.redhat.component=ubi9-container, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, release=1214.1726694543, config_id=kepler)
Feb 18 09:47:47 np0005623263 kepler[228197]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Feb 18 09:47:47 np0005623263 podman[228181]: 2026-02-18 14:47:47.105183204 +0000 UTC m=+0.119009737 container start a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, io.buildah.version=1.29.0, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, container_name=kepler, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9)
Feb 18 09:47:47 np0005623263 podman[228181]: kepler
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.112268       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.112561       1 config.go:293] using gCgroup ID in the BPF program: true
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.112587       1 config.go:295] kernel version: 5.14
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.113252       1 power.go:78] Unable to obtain power, use estimate method
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.113283       1 redfish.go:169] failed to get redfish credential file path
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.113548       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.113574       1 power.go:79] using none to obtain power
Feb 18 09:47:47 np0005623263 kepler[228197]: E0218 14:47:47.113601       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Feb 18 09:47:47 np0005623263 kepler[228197]: E0218 14:47:47.113645       1 exporter.go:154] failed to init GPU accelerators: no devices found
Feb 18 09:47:47 np0005623263 kepler[228197]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.116157       1 exporter.go:84] Number of CPUs: 8
Feb 18 09:47:47 np0005623263 systemd[1]: Started kepler container.
Feb 18 09:47:47 np0005623263 podman[228207]: 2026-02-18 14:47:47.195997071 +0000 UTC m=+0.080638695 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.openshift.tags=base rhel9, name=ubi9, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, container_name=kepler, release-0.7.12=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, vcs-type=git, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 18 09:47:47 np0005623263 systemd[1]: a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d-206e110868322770.service: Main process exited, code=exited, status=1/FAILURE
Feb 18 09:47:47 np0005623263 systemd[1]: a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d-206e110868322770.service: Failed with result 'exit-code'.
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.417 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.417 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp4oh08080/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.298 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.301 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.303 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.303 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.529 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.530 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.531 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.531 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.531 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.531 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.531 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.531 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.531 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.532 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.532 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.532 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.532 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.534 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.535 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.535 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.535 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.535 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.535 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.535 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.535 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.535 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.535 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.536 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.536 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.536 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.536 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.537 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.537 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.537 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.537 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.537 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.537 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.537 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.537 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.538 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.538 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.538 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.538 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.538 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.538 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.538 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.538 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.538 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.539 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.539 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.539 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.539 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.539 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.539 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.539 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.539 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.539 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.539 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.540 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.540 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.540 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.540 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.541 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.541 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.541 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.541 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.541 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.541 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.541 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.542 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.542 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.542 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.542 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.542 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.542 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.542 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.542 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.542 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.542 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.542 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.543 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.543 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.543 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.543 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.543 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.543 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.543 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.543 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.543 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.543 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.543 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.544 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.544 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.544 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.544 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.544 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.544 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.544 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.544 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.544 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.544 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.544 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.545 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.545 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.545 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.545 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.545 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.545 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.545 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.545 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.545 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.545 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.545 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.546 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.546 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.546 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.546 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.546 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.546 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.546 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.546 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.546 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.546 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.547 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.547 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.547 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.547 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.547 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.547 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.547 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.547 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.547 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.547 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.547 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.547 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.548 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.548 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.548 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.548 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.548 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.548 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.548 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.548 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.548 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.549 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.549 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.549 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.549 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.549 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.549 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.549 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.549 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.549 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.549 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.549 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.549 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.550 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.550 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.550 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.550 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.550 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.550 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.550 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.550 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.550 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.550 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.550 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.551 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.551 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.551 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.551 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.551 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.551 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.551 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.551 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.551 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.551 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.551 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.551 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.552 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.552 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.552 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.552 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.552 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.552 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.552 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.552 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.552 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.552 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.552 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.553 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.553 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.553 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.553 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.553 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.553 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.553 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.553 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.553 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.553 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.553 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.554 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.554 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.554 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.554 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.554 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.554 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.554 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.554 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.554 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.554 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.554 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.555 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.555 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.555 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.555 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.555 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.555 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.555 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.555 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Feb 18 09:47:47 np0005623263 ceilometer_agent_ipmi[227945]: 2026-02-18 14:47:47.558 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.599894       1 watcher.go:83] Using in cluster k8s config
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.600398       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Feb 18 09:47:47 np0005623263 kepler[228197]: E0218 14:47:47.600807       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.604230       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.604539       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.608429       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.608774       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.615527       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.615918       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.616205       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.621538       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.621800       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.622091       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.622309       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.622514       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.622733       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.623277       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.623538       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.623799       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.624071       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.624447       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Feb 18 09:47:47 np0005623263 kepler[228197]: I0218 14:47:47.625069       1 exporter.go:208] Started Kepler in 512.9732ms
Feb 18 09:47:47 np0005623263 python3.9[228385]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 18 09:47:48 np0005623263 python3.9[228550]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Feb 18 09:47:49 np0005623263 python3.9[228716]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:47:49 np0005623263 systemd[1]: Started libpod-conmon-b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164.scope.
Feb 18 09:47:49 np0005623263 podman[228717]: 2026-02-18 14:47:49.746047015 +0000 UTC m=+0.094876694 container exec b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 18 09:47:49 np0005623263 podman[228717]: 2026-02-18 14:47:49.783261827 +0000 UTC m=+0.132091506 container exec_died b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 09:47:49 np0005623263 systemd[1]: libpod-conmon-b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164.scope: Deactivated successfully.
Feb 18 09:47:50 np0005623263 python3.9[228897]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:47:50 np0005623263 systemd[1]: Started libpod-conmon-b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164.scope.
Feb 18 09:47:50 np0005623263 podman[228898]: 2026-02-18 14:47:50.62715637 +0000 UTC m=+0.077650718 container exec b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 18 09:47:50 np0005623263 podman[228917]: 2026-02-18 14:47:50.70143983 +0000 UTC m=+0.062836345 container exec_died b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Feb 18 09:47:50 np0005623263 podman[228898]: 2026-02-18 14:47:50.714344094 +0000 UTC m=+0.164838442 container exec_died b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Feb 18 09:47:50 np0005623263 systemd[1]: libpod-conmon-b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164.scope: Deactivated successfully.
Feb 18 09:47:51 np0005623263 python3.9[229081]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:52 np0005623263 python3.9[229234]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Feb 18 09:47:52 np0005623263 python3.9[229399]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:47:52 np0005623263 systemd[1]: Started libpod-conmon-99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05.scope.
Feb 18 09:47:52 np0005623263 podman[229400]: 2026-02-18 14:47:52.99445524 +0000 UTC m=+0.108601838 container exec 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 09:47:53 np0005623263 podman[229400]: 2026-02-18 14:47:53.037274167 +0000 UTC m=+0.151420785 container exec_died 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Feb 18 09:47:53 np0005623263 systemd[1]: libpod-conmon-99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05.scope: Deactivated successfully.
Feb 18 09:47:53 np0005623263 python3.9[229583]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:47:53 np0005623263 systemd[1]: Started libpod-conmon-99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05.scope.
Feb 18 09:47:53 np0005623263 podman[229584]: 2026-02-18 14:47:53.853293449 +0000 UTC m=+0.120716251 container exec 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 09:47:53 np0005623263 podman[229603]: 2026-02-18 14:47:53.924379297 +0000 UTC m=+0.059156781 container exec_died 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 18 09:47:53 np0005623263 podman[229584]: 2026-02-18 14:47:53.945306637 +0000 UTC m=+0.212729419 container exec_died 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 18 09:47:53 np0005623263 systemd[1]: libpod-conmon-99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05.scope: Deactivated successfully.
Feb 18 09:47:54 np0005623263 podman[229767]: 2026-02-18 14:47:54.562764318 +0000 UTC m=+0.060246259 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 09:47:54 np0005623263 python3.9[229769]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:55 np0005623263 python3.9[229946]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Feb 18 09:47:55 np0005623263 podman[229960]: 2026-02-18 14:47:55.707927287 +0000 UTC m=+0.118525115 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 18 09:47:56 np0005623263 python3.9[230130]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:47:56 np0005623263 systemd[1]: Started libpod-conmon-126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630.scope.
Feb 18 09:47:56 np0005623263 podman[230131]: 2026-02-18 14:47:56.373729887 +0000 UTC m=+0.122140208 container exec 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.43.0)
Feb 18 09:47:56 np0005623263 podman[230131]: 2026-02-18 14:47:56.407820828 +0000 UTC m=+0.156231129 container exec_died 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 18 09:47:56 np0005623263 systemd[1]: libpod-conmon-126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630.scope: Deactivated successfully.
Feb 18 09:47:57 np0005623263 python3.9[230315]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:47:57 np0005623263 systemd[1]: Started libpod-conmon-126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630.scope.
Feb 18 09:47:57 np0005623263 podman[230316]: 2026-02-18 14:47:57.517706907 +0000 UTC m=+0.121199324 container exec 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 18 09:47:57 np0005623263 podman[230316]: 2026-02-18 14:47:57.552426884 +0000 UTC m=+0.155919311 container exec_died 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 18 09:47:57 np0005623263 systemd[1]: libpod-conmon-126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630.scope: Deactivated successfully.
Feb 18 09:47:57 np0005623263 podman[230347]: 2026-02-18 14:47:57.790448577 +0000 UTC m=+0.139767504 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 18 09:47:58 np0005623263 python3.9[230523]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:47:59 np0005623263 python3.9[230676]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Feb 18 09:47:59 np0005623263 podman[204930]: time="2026-02-18T14:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 09:47:59 np0005623263 podman[204930]: @ - - [18/Feb/2026:14:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28008 "" "Go-http-client/1.1"
Feb 18 09:47:59 np0005623263 podman[204930]: @ - - [18/Feb/2026:14:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3863 "" "Go-http-client/1.1"
Feb 18 09:48:00 np0005623263 python3.9[230845]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:48:00 np0005623263 systemd[1]: Started libpod-conmon-4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c.scope.
Feb 18 09:48:00 np0005623263 podman[230846]: 2026-02-18 14:48:00.176816989 +0000 UTC m=+0.105567679 container exec 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 09:48:00 np0005623263 podman[230846]: 2026-02-18 14:48:00.210551881 +0000 UTC m=+0.139302581 container exec_died 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 09:48:00 np0005623263 systemd[1]: libpod-conmon-4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c.scope: Deactivated successfully.
Feb 18 09:48:00 np0005623263 python3.9[231030]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:48:01 np0005623263 systemd[1]: Started libpod-conmon-4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c.scope.
Feb 18 09:48:01 np0005623263 podman[231031]: 2026-02-18 14:48:01.053937151 +0000 UTC m=+0.101857973 container exec 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 09:48:01 np0005623263 podman[231051]: 2026-02-18 14:48:01.368069511 +0000 UTC m=+0.301406042 container exec_died 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 09:48:01 np0005623263 podman[231031]: 2026-02-18 14:48:01.378900171 +0000 UTC m=+0.426820983 container exec_died 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 18 09:48:01 np0005623263 systemd[1]: libpod-conmon-4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c.scope: Deactivated successfully.
Feb 18 09:48:01 np0005623263 openstack_network_exporter[208107]: ERROR   14:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 09:48:01 np0005623263 openstack_network_exporter[208107]: 
Feb 18 09:48:01 np0005623263 openstack_network_exporter[208107]: ERROR   14:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 09:48:01 np0005623263 openstack_network_exporter[208107]: 
Feb 18 09:48:02 np0005623263 python3.9[231216]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:02 np0005623263 python3.9[231371]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Feb 18 09:48:03 np0005623263 python3.9[231537]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:48:03 np0005623263 systemd[1]: Started libpod-conmon-a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e.scope.
Feb 18 09:48:03 np0005623263 podman[231538]: 2026-02-18 14:48:03.696082185 +0000 UTC m=+0.092943513 container exec a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 18 09:48:03 np0005623263 podman[231538]: 2026-02-18 14:48:03.729759596 +0000 UTC m=+0.126620924 container exec_died a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 18 09:48:03 np0005623263 systemd[1]: libpod-conmon-a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e.scope: Deactivated successfully.
Feb 18 09:48:04 np0005623263 python3.9[231721]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:48:04 np0005623263 systemd[1]: Started libpod-conmon-a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e.scope.
Feb 18 09:48:04 np0005623263 podman[231722]: 2026-02-18 14:48:04.506228456 +0000 UTC m=+0.094717659 container exec a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 18 09:48:04 np0005623263 podman[231722]: 2026-02-18 14:48:04.53653472 +0000 UTC m=+0.125023903 container exec_died a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 09:48:04 np0005623263 systemd[1]: libpod-conmon-a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e.scope: Deactivated successfully.
Feb 18 09:48:05 np0005623263 python3.9[231906]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:05 np0005623263 python3.9[232059]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Feb 18 09:48:07 np0005623263 python3.9[232222]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:48:07 np0005623263 systemd[1]: Started libpod-conmon-4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165.scope.
Feb 18 09:48:07 np0005623263 podman[232223]: 2026-02-18 14:48:07.142257302 +0000 UTC m=+0.104903542 container exec 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, version=9.7, container_name=openstack_network_exporter, release=1770267347, architecture=x86_64, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7)
Feb 18 09:48:07 np0005623263 podman[232223]: 2026-02-18 14:48:07.218804261 +0000 UTC m=+0.181450401 container exec_died 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git)
Feb 18 09:48:07 np0005623263 systemd[1]: libpod-conmon-4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165.scope: Deactivated successfully.
Feb 18 09:48:08 np0005623263 python3.9[232404]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:48:08 np0005623263 systemd[1]: Started libpod-conmon-4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165.scope.
Feb 18 09:48:08 np0005623263 podman[232405]: 2026-02-18 14:48:08.187635463 +0000 UTC m=+0.090144791 container exec 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, version=9.7, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc.)
Feb 18 09:48:08 np0005623263 podman[232405]: 2026-02-18 14:48:08.218687696 +0000 UTC m=+0.121197004 container exec_died 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, release=1770267347, vcs-type=git, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 18 09:48:08 np0005623263 systemd[1]: libpod-conmon-4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165.scope: Deactivated successfully.
Feb 18 09:48:08 np0005623263 podman[232587]: 2026-02-18 14:48:08.91102881 +0000 UTC m=+0.096472523 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 18 09:48:09 np0005623263 python3.9[232589]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:09 np0005623263 python3.9[232765]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Feb 18 09:48:10 np0005623263 python3.9[232931]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:48:10 np0005623263 systemd[1]: Started libpod-conmon-9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22.scope.
Feb 18 09:48:10 np0005623263 podman[232932]: 2026-02-18 14:48:10.870400157 +0000 UTC m=+0.124337185 container exec 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Feb 18 09:48:10 np0005623263 podman[232932]: 2026-02-18 14:48:10.903488582 +0000 UTC m=+0.157425530 container exec_died 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 09:48:10 np0005623263 systemd[1]: libpod-conmon-9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22.scope: Deactivated successfully.
Feb 18 09:48:11 np0005623263 python3.9[233117]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:48:11 np0005623263 systemd[1]: Started libpod-conmon-9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22.scope.
Feb 18 09:48:11 np0005623263 podman[233118]: 2026-02-18 14:48:11.785657095 +0000 UTC m=+0.083737896 container exec 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 18 09:48:11 np0005623263 podman[233118]: 2026-02-18 14:48:11.817565189 +0000 UTC m=+0.115645990 container exec_died 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi)
Feb 18 09:48:11 np0005623263 systemd[1]: libpod-conmon-9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22.scope: Deactivated successfully.
Feb 18 09:48:11 np0005623263 podman[233133]: 2026-02-18 14:48:11.89226474 +0000 UTC m=+0.104855601 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal)
Feb 18 09:48:12 np0005623263 python3.9[233320]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:13 np0005623263 python3.9[233473]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Feb 18 09:48:14 np0005623263 python3.9[233639]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:48:14 np0005623263 systemd[1]: Started libpod-conmon-a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d.scope.
Feb 18 09:48:14 np0005623263 podman[233640]: 2026-02-18 14:48:14.192774573 +0000 UTC m=+0.100747785 container exec a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, architecture=x86_64, container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, managed_by=edpm_ansible, name=ubi9)
Feb 18 09:48:14 np0005623263 podman[233640]: 2026-02-18 14:48:14.225030967 +0000 UTC m=+0.133004159 container exec_died a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=kepler, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Feb 18 09:48:14 np0005623263 systemd[1]: libpod-conmon-a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d.scope: Deactivated successfully.
Feb 18 09:48:14 np0005623263 python3.9[233822]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 18 09:48:15 np0005623263 systemd[1]: Started libpod-conmon-a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d.scope.
Feb 18 09:48:15 np0005623263 podman[233823]: 2026-02-18 14:48:15.107752544 +0000 UTC m=+0.117482458 container exec a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=kepler, release-0.7.12=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, vcs-type=git, distribution-scope=public)
Feb 18 09:48:15 np0005623263 podman[233823]: 2026-02-18 14:48:15.141119877 +0000 UTC m=+0.150849761 container exec_died a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release-0.7.12=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Feb 18 09:48:15 np0005623263 systemd[1]: libpod-conmon-a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d.scope: Deactivated successfully.
Feb 18 09:48:15 np0005623263 podman[233838]: 2026-02-18 14:48:15.218177908 +0000 UTC m=+0.115534917 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 18 09:48:15 np0005623263 python3.9[234024]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:16 np0005623263 podman[234025]: 2026-02-18 14:48:16.022393296 +0000 UTC m=+0.073563693 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi)
Feb 18 09:48:16 np0005623263 python3.9[234197]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:17 np0005623263 podman[234321]: 2026-02-18 14:48:17.420206986 +0000 UTC m=+0.082687668 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, release-0.7.12=, container_name=kepler, 
io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, version=9.4)
Feb 18 09:48:17 np0005623263 python3.9[234364]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:48:18 np0005623263 python3.9[234494]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771426097.0905204-915-65989177045384/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:18 np0005623263 python3.9[234647]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:19 np0005623263 python3.9[234800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:48:20 np0005623263 python3.9[234880]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:21 np0005623263 python3.9[235033]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:48:21 np0005623263 python3.9[235112]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.fg2lwkw0 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:22 np0005623263 python3.9[235265]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:48:22 np0005623263 python3.9[235344]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:23 np0005623263 python3.9[235497]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:48:24 np0005623263 python3[235651]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 18 09:48:24 np0005623263 podman[235730]: 2026-02-18 14:48:24.758765653 +0000 UTC m=+0.081929869 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 09:48:25 np0005623263 python3.9[235830]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:48:25 np0005623263 python3.9[235909]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:26 np0005623263 podman[235910]: 2026-02-18 14:48:26.043369238 +0000 UTC m=+0.099090582 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, 
io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 18 09:48:26 np0005623263 python3.9[236081]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:48:27 np0005623263 python3.9[236160]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:27 np0005623263 python3.9[236313]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:48:27 np0005623263 podman[236316]: 2026-02-18 14:48:27.972248275 +0000 UTC m=+0.113362771 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
config_id=ovn_controller, container_name=ovn_controller)
Feb 18 09:48:28 np0005623263 python3.9[236417]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:28 np0005623263 python3.9[236570]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:48:29 np0005623263 python3.9[236649]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:29 np0005623263 podman[204930]: time="2026-02-18T14:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 09:48:29 np0005623263 podman[204930]: @ - - [18/Feb/2026:14:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 09:48:29 np0005623263 podman[204930]: @ - - [18/Feb/2026:14:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3871 "" "Go-http-client/1.1"
Feb 18 09:48:30 np0005623263 python3.9[236802]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:48:30 np0005623263 python3.9[236928]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771426109.5571034-1040-93802017295266/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:31 np0005623263 openstack_network_exporter[208107]: ERROR   14:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 09:48:31 np0005623263 openstack_network_exporter[208107]: 
Feb 18 09:48:31 np0005623263 openstack_network_exporter[208107]: ERROR   14:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 09:48:31 np0005623263 openstack_network_exporter[208107]: 
Feb 18 09:48:31 np0005623263 python3.9[237081]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:32 np0005623263 python3.9[237234]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:48:33 np0005623263 python3.9[237390]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:33 np0005623263 nova_compute[189016]: 2026-02-18 14:48:33.318 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:48:33 np0005623263 nova_compute[189016]: 2026-02-18 14:48:33.319 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:48:33 np0005623263 nova_compute[189016]: 2026-02-18 14:48:33.339 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:48:33 np0005623263 nova_compute[189016]: 2026-02-18 14:48:33.340 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 09:48:33 np0005623263 nova_compute[189016]: 2026-02-18 14:48:33.340 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 09:48:33 np0005623263 nova_compute[189016]: 2026-02-18 14:48:33.359 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 18 09:48:33 np0005623263 python3.9[237543]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:48:34 np0005623263 python3.9[237697]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 18 09:48:35 np0005623263 nova_compute[189016]: 2026-02-18 14:48:35.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:48:35 np0005623263 nova_compute[189016]: 2026-02-18 14:48:35.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:48:35 np0005623263 python3.9[237852]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 09:48:36 np0005623263 python3.9[238008]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:36 np0005623263 nova_compute[189016]: 2026-02-18 14:48:36.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:48:36 np0005623263 nova_compute[189016]: 2026-02-18 14:48:36.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:48:36 np0005623263 nova_compute[189016]: 2026-02-18 14:48:36.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:48:36 np0005623263 nova_compute[189016]: 2026-02-18 14:48:36.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:48:36 np0005623263 nova_compute[189016]: 2026-02-18 14:48:36.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 09:48:36 np0005623263 systemd[1]: session-26.scope: Deactivated successfully.
Feb 18 09:48:36 np0005623263 systemd[1]: session-26.scope: Consumed 1min 14.460s CPU time.
Feb 18 09:48:36 np0005623263 systemd-logind[831]: Session 26 logged out. Waiting for processes to exit.
Feb 18 09:48:36 np0005623263 systemd-logind[831]: Removed session 26.
Feb 18 09:48:37 np0005623263 nova_compute[189016]: 2026-02-18 14:48:37.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 09:48:37 np0005623263 nova_compute[189016]: 2026-02-18 14:48:37.081 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 09:48:37 np0005623263 nova_compute[189016]: 2026-02-18 14:48:37.082 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 09:48:37 np0005623263 nova_compute[189016]: 2026-02-18 14:48:37.082 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 09:48:37 np0005623263 nova_compute[189016]: 2026-02-18 14:48:37.082 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 09:48:37 np0005623263 nova_compute[189016]: 2026-02-18 14:48:37.398 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 09:48:37 np0005623263 nova_compute[189016]: 2026-02-18 14:48:37.399 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5628MB free_disk=72.29989242553711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 09:48:37 np0005623263 nova_compute[189016]: 2026-02-18 14:48:37.399 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 09:48:37 np0005623263 nova_compute[189016]: 2026-02-18 14:48:37.399 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 09:48:37 np0005623263 nova_compute[189016]: 2026-02-18 14:48:37.458 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 09:48:37 np0005623263 nova_compute[189016]: 2026-02-18 14:48:37.458 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 09:48:37 np0005623263 nova_compute[189016]: 2026-02-18 14:48:37.491 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 09:48:37 np0005623263 nova_compute[189016]: 2026-02-18 14:48:37.514 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 09:48:37 np0005623263 nova_compute[189016]: 2026-02-18 14:48:37.517 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 09:48:37 np0005623263 nova_compute[189016]: 2026-02-18 14:48:37.517 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 09:48:39 np0005623263 podman[238035]: 2026-02-18 14:48:39.734338192 +0000 UTC m=+0.064306353 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 18 09:48:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:48:41.414 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 09:48:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:48:41.416 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 09:48:41 np0005623263 ovn_metadata_agent[108395]: 2026-02-18 14:48:41.417 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 09:48:41 np0005623263 systemd-logind[831]: New session 27 of user zuul.
Feb 18 09:48:41 np0005623263 systemd[1]: Started Session 27 of User zuul.
Feb 18 09:48:42 np0005623263 podman[238185]: 2026-02-18 14:48:42.60643337 +0000 UTC m=+0.073546442 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., version=9.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': 
['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 18 09:48:42 np0005623263 python3.9[238224]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 09:48:44 np0005623263 python3.9[238389]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Feb 18 09:48:45 np0005623263 python3.9[238543]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 18 09:48:45 np0005623263 podman[238552]: 2026-02-18 14:48:45.492348285 +0000 UTC m=+0.084964948 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 18 09:48:46 np0005623263 python3.9[238646]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 18 09:48:46 np0005623263 podman[238648]: 2026-02-18 14:48:46.759477428 +0000 UTC m=+0.084106445 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 09:48:47 np0005623263 podman[238669]: 2026-02-18 14:48:47.741296416 +0000 UTC m=+0.070890654 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, name=ubi9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, distribution-scope=public, release=1214.1726694543, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, container_name=kepler, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_id=kepler, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9)
Feb 18 09:48:49 np0005623263 python3.9[238845]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:48:50 np0005623263 python3.9[238972]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771426128.8500407-49-256959724391071/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:51 np0005623263 python3.9[239125]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 09:48:51 np0005623263 python3.9[239278]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 18 09:48:52 np0005623263 python3.9[239402]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771426131.4259052-72-57231590178414/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 18 14:48:53 compute-0 python3.9[239555]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 18 14:48:53 compute-0 systemd[1]: Stopping System Logging Service...
Feb 18 14:48:53 compute-0 rsyslogd[1015]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1015" x-info="https://www.rsyslog.com"] exiting on signal 15.
Feb 18 14:48:53 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Feb 18 14:48:53 compute-0 systemd[1]: Stopped System Logging Service.
Feb 18 14:48:53 compute-0 systemd[1]: rsyslog.service: Consumed 3.821s CPU time, 8.0M memory peak, read 0B from disk, written 6.1M to disk.
Feb 18 14:48:53 compute-0 systemd[1]: Starting System Logging Service...
Feb 18 14:48:53 compute-0 rsyslogd[239561]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="239561" x-info="https://www.rsyslog.com"] start
Feb 18 14:48:53 compute-0 systemd[1]: Started System Logging Service.
Feb 18 14:48:53 compute-0 rsyslogd[239561]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 18 14:48:53 compute-0 rsyslogd[239561]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Feb 18 14:48:53 compute-0 rsyslogd[239561]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Feb 18 14:48:53 compute-0 rsyslogd[239561]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Feb 18 14:48:53 compute-0 rsyslogd[239561]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Feb 18 14:48:54 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Feb 18 14:48:54 compute-0 systemd[1]: session-27.scope: Consumed 9.153s CPU time.
Feb 18 14:48:54 compute-0 systemd-logind[831]: Session 27 logged out. Waiting for processes to exit.
Feb 18 14:48:54 compute-0 systemd-logind[831]: Removed session 27.
Feb 18 14:48:55 compute-0 podman[239591]: 2026-02-18 14:48:55.767682106 +0000 UTC m=+0.098277779 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 14:48:56 compute-0 podman[239615]: 2026-02-18 14:48:56.758251557 +0000 UTC m=+0.080180028 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Feb 18 14:48:58 compute-0 podman[239634]: 2026-02-18 14:48:58.811424508 +0000 UTC m=+0.140912396 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 18 14:48:59 compute-0 podman[204930]: time="2026-02-18T14:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:48:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 14:48:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3871 "" "Go-http-client/1.1"
Feb 18 14:49:01 compute-0 openstack_network_exporter[208107]: ERROR   14:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:49:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:49:01 compute-0 openstack_network_exporter[208107]: ERROR   14:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:49:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:49:10 compute-0 podman[239658]: 2026-02-18 14:49:10.744379253 +0000 UTC m=+0.069248177 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 14:49:12 compute-0 podman[239682]: 2026-02-18 14:49:12.748147054 +0000 UTC m=+0.075693753 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z)
Feb 18 14:49:15 compute-0 podman[239702]: 2026-02-18 14:49:15.742786113 +0000 UTC m=+0.072754018 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 18 14:49:17 compute-0 podman[239719]: 2026-02-18 14:49:17.754368083 +0000 UTC m=+0.083332938 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 18 14:49:17 compute-0 podman[239739]: 2026-02-18 14:49:17.850689761 +0000 UTC m=+0.060252459 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.29.0, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, release-0.7.12=, vcs-type=git, distribution-scope=public, name=ubi9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.183 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.185 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.185 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.186 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f78deee07d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.189 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.190 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.190 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.190 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.190 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.194 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.194 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.192 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.194 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f78deee0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.194 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.195 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f78deee0830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.195 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.195 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f78deee2ae0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.195 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.195 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f78deee0890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.195 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.195 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f78deee08f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.195 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.195 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f78deee2390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.195 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f78e1127770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.196 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f78deee0950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.196 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f78deee21b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.196 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f78deee0d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.196 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f78deee09b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.196 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f78deee1010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.197 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.197 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f78deee0a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.197 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.197 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f78deee1280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.197 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.197 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f78deee2240>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.197 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.197 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f78deee0a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.197 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.197 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f78deee22d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.198 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.194 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.incoming.packets.drop': [], 'cpu': [], 'disk.device.write.latency': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.198 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.incoming.packets.drop': [], 'cpu': [], 'disk.device.write.latency': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.198 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.incoming.packets.drop': [], 'cpu': [], 'disk.device.write.latency': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.198 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.incoming.packets.drop': [], 'cpu': [], 'disk.device.write.latency': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.199 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.incoming.packets.drop': [], 'cpu': [], 'disk.device.write.latency': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.199 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.incoming.packets.drop': [], 'cpu': [], 'disk.device.write.latency': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.199 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.incoming.packets.drop': [], 'cpu': [], 'disk.device.write.latency': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.199 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78ddcdf590>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.incoming.packets.drop': [], 'cpu': [], 'disk.device.write.latency': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'network.outgoing.bytes': [], 'disk.root.size': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.198 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f78deee2330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.200 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.200 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f78deee23c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.200 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.200 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f78deee0cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.200 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.200 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f78deee2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.200 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.200 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f78deee0d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.200 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.200 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f78deee1520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.201 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.201 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f78deee20f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.201 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.201 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f78deee2420>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.201 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.201 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.201 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.201 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.202 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.202 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.202 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.202 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.203 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.203 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.203 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.203 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.206 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.206 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.206 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.206 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.206 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:49:25.207 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:49:26 compute-0 podman[239761]: 2026-02-18 14:49:26.733409087 +0000 UTC m=+0.064302233 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 14:49:27 compute-0 podman[239785]: 2026-02-18 14:49:27.777163445 +0000 UTC m=+0.102789584 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 18 14:49:29 compute-0 podman[204930]: time="2026-02-18T14:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:49:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 14:49:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3869 "" "Go-http-client/1.1"
Feb 18 14:49:29 compute-0 podman[239805]: 2026-02-18 14:49:29.816504133 +0000 UTC m=+0.139336937 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 18 14:49:31 compute-0 openstack_network_exporter[208107]: ERROR   14:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:49:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:49:31 compute-0 openstack_network_exporter[208107]: ERROR   14:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:49:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:49:32 compute-0 nova_compute[189016]: 2026-02-18 14:49:32.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:49:32 compute-0 nova_compute[189016]: 2026-02-18 14:49:32.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 18 14:49:32 compute-0 nova_compute[189016]: 2026-02-18 14:49:32.070 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 18 14:49:32 compute-0 nova_compute[189016]: 2026-02-18 14:49:32.073 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:49:32 compute-0 nova_compute[189016]: 2026-02-18 14:49:32.073 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 18 14:49:32 compute-0 nova_compute[189016]: 2026-02-18 14:49:32.093 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:49:33 compute-0 nova_compute[189016]: 2026-02-18 14:49:33.104 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:49:33 compute-0 nova_compute[189016]: 2026-02-18 14:49:33.104 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:49:33 compute-0 nova_compute[189016]: 2026-02-18 14:49:33.104 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 14:49:33 compute-0 nova_compute[189016]: 2026-02-18 14:49:33.104 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 14:49:33 compute-0 nova_compute[189016]: 2026-02-18 14:49:33.141 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 18 14:49:33 compute-0 systemd-logind[831]: New session 28 of user zuul.
Feb 18 14:49:33 compute-0 systemd[1]: Started Session 28 of User zuul.
Feb 18 14:49:34 compute-0 python3[240007]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 14:49:35 compute-0 python3[240231]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 14:49:36 compute-0 nova_compute[189016]: 2026-02-18 14:49:36.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:49:36 compute-0 nova_compute[189016]: 2026-02-18 14:49:36.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:49:36 compute-0 nova_compute[189016]: 2026-02-18 14:49:36.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:49:36 compute-0 nova_compute[189016]: 2026-02-18 14:49:36.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:49:36 compute-0 nova_compute[189016]: 2026-02-18 14:49:36.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 14:49:36 compute-0 python3[240385]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "nova_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.097 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.097 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.097 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.097 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.430 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.432 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5684MB free_disk=72.30064392089844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.432 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.432 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.546 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.547 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.643 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing inventories for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.715 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating ProviderTree inventory for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.716 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating inventory in ProviderTree for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.735 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing aggregate associations for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.765 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing trait associations for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380, traits: HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.786 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.801 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.802 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 14:49:37 compute-0 nova_compute[189016]: 2026-02-18 14:49:37.803 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:49:39 compute-0 python3[240536]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 18 14:49:40 compute-0 python3[240690]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 18 14:49:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:49:41.416 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:49:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:49:41.419 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:49:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:49:41.419 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:49:41 compute-0 podman[240765]: 2026-02-18 14:49:41.643456342 +0000 UTC m=+0.091055835 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 14:49:42 compute-0 python3[240939]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 14:49:43 compute-0 podman[241077]: 2026-02-18 14:49:43.229367118 +0000 UTC m=+0.063894632 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=openstack_network_exporter, distribution-scope=public, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, managed_by=edpm_ansible, name=ubi9/ubi-minimal)
Feb 18 14:49:43 compute-0 python3[241125]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 14:49:46 compute-0 podman[241164]: 2026-02-18 14:49:46.729905579 +0000 UTC m=+0.059258933 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 18 14:49:48 compute-0 podman[241183]: 2026-02-18 14:49:48.778661568 +0000 UTC m=+0.093365614 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Feb 18 14:49:48 compute-0 podman[241184]: 2026-02-18 14:49:48.798678239 +0000 UTC m=+0.111239470 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.component=ubi9-container, container_name=kepler, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, config_id=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Feb 18 14:49:57 compute-0 podman[241222]: 2026-02-18 14:49:57.736003334 +0000 UTC m=+0.066218723 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 14:49:58 compute-0 podman[241244]: 2026-02-18 14:49:58.749423686 +0000 UTC m=+0.075231705 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 18 14:49:59 compute-0 podman[204930]: time="2026-02-18T14:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:49:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 14:49:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3875 "" "Go-http-client/1.1"
Feb 18 14:50:00 compute-0 podman[241264]: 2026-02-18 14:50:00.767391238 +0000 UTC m=+0.098303958 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base 
Image, maintainer=OpenStack Kubernetes Operator team)
Feb 18 14:50:01 compute-0 openstack_network_exporter[208107]: ERROR   14:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:50:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:50:01 compute-0 openstack_network_exporter[208107]: ERROR   14:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:50:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:50:12 compute-0 podman[241289]: 2026-02-18 14:50:12.725585312 +0000 UTC m=+0.054307678 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 18 14:50:13 compute-0 podman[241313]: 2026-02-18 14:50:13.759706895 +0000 UTC m=+0.093467514 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9/ubi-minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, vendor=Red Hat, Inc., version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, container_name=openstack_network_exporter, architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-type=git)
Feb 18 14:50:17 compute-0 podman[241333]: 2026-02-18 14:50:17.725882728 +0000 UTC m=+0.054404750 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 14:50:19 compute-0 podman[241352]: 2026-02-18 14:50:19.781685843 +0000 UTC m=+0.101269144 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, release-0.7.12=, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, distribution-scope=public)
Feb 18 14:50:19 compute-0 podman[241351]: 2026-02-18 14:50:19.783830418 +0000 UTC m=+0.110582094 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Feb 18 14:50:28 compute-0 podman[241389]: 2026-02-18 14:50:28.758349025 +0000 UTC m=+0.083754594 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 14:50:28 compute-0 podman[241413]: 2026-02-18 14:50:28.858778897 +0000 UTC m=+0.069291392 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Feb 18 14:50:29 compute-0 podman[204930]: time="2026-02-18T14:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:50:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 14:50:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3875 "" "Go-http-client/1.1"
Feb 18 14:50:31 compute-0 openstack_network_exporter[208107]: ERROR   14:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:50:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:50:31 compute-0 openstack_network_exporter[208107]: ERROR   14:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:50:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:50:31 compute-0 podman[241434]: 2026-02-18 14:50:31.755921249 +0000 UTC m=+0.084209505 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, 
org.label-schema.license=GPLv2, config_id=ovn_controller)
Feb 18 14:50:33 compute-0 nova_compute[189016]: 2026-02-18 14:50:33.796 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:50:33 compute-0 nova_compute[189016]: 2026-02-18 14:50:33.816 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:50:33 compute-0 nova_compute[189016]: 2026-02-18 14:50:33.817 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 14:50:33 compute-0 nova_compute[189016]: 2026-02-18 14:50:33.817 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 14:50:33 compute-0 nova_compute[189016]: 2026-02-18 14:50:33.831 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 18 14:50:34 compute-0 nova_compute[189016]: 2026-02-18 14:50:34.080 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:50:37 compute-0 nova_compute[189016]: 2026-02-18 14:50:37.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:50:37 compute-0 nova_compute[189016]: 2026-02-18 14:50:37.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:50:37 compute-0 nova_compute[189016]: 2026-02-18 14:50:37.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:50:37 compute-0 nova_compute[189016]: 2026-02-18 14:50:37.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 14:50:38 compute-0 nova_compute[189016]: 2026-02-18 14:50:38.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:50:38 compute-0 nova_compute[189016]: 2026-02-18 14:50:38.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:50:39 compute-0 nova_compute[189016]: 2026-02-18 14:50:39.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:50:39 compute-0 nova_compute[189016]: 2026-02-18 14:50:39.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:50:39 compute-0 nova_compute[189016]: 2026-02-18 14:50:39.092 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:50:39 compute-0 nova_compute[189016]: 2026-02-18 14:50:39.092 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:50:39 compute-0 nova_compute[189016]: 2026-02-18 14:50:39.093 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:50:39 compute-0 nova_compute[189016]: 2026-02-18 14:50:39.093 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 14:50:39 compute-0 nova_compute[189016]: 2026-02-18 14:50:39.427 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 14:50:39 compute-0 nova_compute[189016]: 2026-02-18 14:50:39.428 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5684MB free_disk=72.30064010620117GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 14:50:39 compute-0 nova_compute[189016]: 2026-02-18 14:50:39.428 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:50:39 compute-0 nova_compute[189016]: 2026-02-18 14:50:39.428 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:50:39 compute-0 nova_compute[189016]: 2026-02-18 14:50:39.503 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 14:50:39 compute-0 nova_compute[189016]: 2026-02-18 14:50:39.503 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 14:50:39 compute-0 nova_compute[189016]: 2026-02-18 14:50:39.529 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 14:50:39 compute-0 nova_compute[189016]: 2026-02-18 14:50:39.543 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 14:50:39 compute-0 nova_compute[189016]: 2026-02-18 14:50:39.544 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 14:50:39 compute-0 nova_compute[189016]: 2026-02-18 14:50:39.544 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:50:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:50:41.416 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:50:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:50:41.417 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:50:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:50:41.417 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:50:42 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Feb 18 14:50:42 compute-0 systemd[1]: session-28.scope: Consumed 7.979s CPU time.
Feb 18 14:50:42 compute-0 systemd-logind[831]: Session 28 logged out. Waiting for processes to exit.
Feb 18 14:50:42 compute-0 systemd-logind[831]: Removed session 28.
Feb 18 14:50:43 compute-0 podman[241460]: 2026-02-18 14:50:43.727860455 +0000 UTC m=+0.054787729 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 14:50:44 compute-0 podman[241483]: 2026-02-18 14:50:44.723378686 +0000 UTC m=+0.056516324 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, release=1770267347, config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 18 14:50:48 compute-0 podman[241502]: 2026-02-18 14:50:48.746256947 +0000 UTC m=+0.078900279 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 18 14:50:50 compute-0 podman[241523]: 2026-02-18 14:50:50.741334962 +0000 UTC m=+0.068003180 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, architecture=x86_64, config_id=kepler, com.redhat.component=ubi9-container, container_name=kepler)
Feb 18 14:50:50 compute-0 podman[241522]: 2026-02-18 14:50:50.752206141 +0000 UTC m=+0.083756374 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 18 14:50:59 compute-0 podman[241561]: 2026-02-18 14:50:59.73647883 +0000 UTC m=+0.064200082 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 18 14:50:59 compute-0 podman[241562]: 2026-02-18 14:50:59.744749673 +0000 UTC m=+0.067915908 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 14:50:59 compute-0 podman[204930]: time="2026-02-18T14:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:50:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 14:50:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3875 "" "Go-http-client/1.1"
Feb 18 14:51:01 compute-0 openstack_network_exporter[208107]: ERROR   14:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:51:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:51:01 compute-0 openstack_network_exporter[208107]: ERROR   14:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:51:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:51:02 compute-0 podman[241601]: 2026-02-18 14:51:02.782189822 +0000 UTC m=+0.114591147 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
tcib_managed=true, container_name=ovn_controller)
Feb 18 14:51:14 compute-0 podman[241629]: 2026-02-18 14:51:14.730287603 +0000 UTC m=+0.054640976 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 18 14:51:14 compute-0 podman[241653]: 2026-02-18 14:51:14.825370839 +0000 UTC m=+0.065158237 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal 
Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.7, io.buildah.version=1.33.7)
Feb 18 14:51:19 compute-0 podman[241676]: 2026-02-18 14:51:19.724574774 +0000 UTC m=+0.056210316 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 18 14:51:21 compute-0 podman[241694]: 2026-02-18 14:51:21.742290812 +0000 UTC m=+0.073657945 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 18 14:51:21 compute-0 podman[241695]: 2026-02-18 14:51:21.74686535 +0000 UTC m=+0.073304137 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.component=ubi9-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, vcs-type=git, architecture=x86_64, container_name=kepler, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.184 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.185 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.185 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.186 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f78deee07d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.187 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.187 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.187 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.187 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.188 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.188 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.188 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.188 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.189 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.188 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.189 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f78deee0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.190 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.190 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f78deee0830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.190 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.190 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f78deee2ae0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.190 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.189 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.190 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f78deee0890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.191 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.192 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f78deee08f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.193 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.194 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.194 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.194 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.194 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.193 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f78deee2390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.194 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dc8b1c40>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.195 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f78e1127770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.196 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f78deee0950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.196 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f78deee21b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.197 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.197 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f78deee0d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.197 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.197 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f78deee09b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.197 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.197 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f78deee1010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.197 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.197 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f78deee0a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.198 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.198 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f78deee1280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.198 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.198 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f78deee2240>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.198 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.198 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f78deee0a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.198 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.198 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f78deee22d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.198 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.199 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f78deee2330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.199 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.199 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f78deee23c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.199 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.199 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f78deee0cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.199 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.199 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f78deee2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.200 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.200 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f78deee0d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.200 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.200 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f78deee1520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.200 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.200 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f78deee20f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.200 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.201 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f78deee2420>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.201 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.201 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.201 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.202 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.202 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.202 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.202 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.202 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.202 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.202 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.202 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.202 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.202 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.203 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.203 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.203 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.203 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.203 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.203 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.203 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.203 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:51:25.204 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:51:29 compute-0 podman[204930]: time="2026-02-18T14:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:51:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 14:51:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3875 "" "Go-http-client/1.1"
Feb 18 14:51:30 compute-0 podman[241734]: 2026-02-18 14:51:30.720487161 +0000 UTC m=+0.051109035 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 18 14:51:30 compute-0 podman[241733]: 2026-02-18 14:51:30.729601546 +0000 UTC m=+0.061471872 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.43.0)
Feb 18 14:51:31 compute-0 openstack_network_exporter[208107]: ERROR   14:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:51:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:51:31 compute-0 openstack_network_exporter[208107]: ERROR   14:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:51:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:51:33 compute-0 nova_compute[189016]: 2026-02-18 14:51:33.545 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:51:33 compute-0 nova_compute[189016]: 2026-02-18 14:51:33.546 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 14:51:33 compute-0 nova_compute[189016]: 2026-02-18 14:51:33.546 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 14:51:33 compute-0 nova_compute[189016]: 2026-02-18 14:51:33.569 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 18 14:51:33 compute-0 podman[241775]: 2026-02-18 14:51:33.767963488 +0000 UTC m=+0.099082109 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 18 14:51:34 compute-0 nova_compute[189016]: 2026-02-18 14:51:34.068 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:51:38 compute-0 nova_compute[189016]: 2026-02-18 14:51:38.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:51:38 compute-0 nova_compute[189016]: 2026-02-18 14:51:38.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:51:38 compute-0 nova_compute[189016]: 2026-02-18 14:51:38.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.080 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.080 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.080 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.081 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.396 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.398 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5682MB free_disk=72.3006591796875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.398 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.399 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.459 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.459 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.483 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.495 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.496 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 14:51:39 compute-0 nova_compute[189016]: 2026-02-18 14:51:39.496 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:51:39 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:51:39.673 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:bc:a6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:81:9f:51:00:b1'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 14:51:39 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:51:39.675 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 18 14:51:39 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:51:39.679 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bff0df27-aa33-4d98-b417-cc9248f7a486, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:51:40 compute-0 nova_compute[189016]: 2026-02-18 14:51:40.497 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:51:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:51:41.418 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:51:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:51:41.419 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:51:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:51:41.419 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:51:45 compute-0 podman[241801]: 2026-02-18 14:51:45.72892584 +0000 UTC m=+0.058423774 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 14:51:45 compute-0 podman[241802]: 2026-02-18 14:51:45.752347342 +0000 UTC m=+0.081456036 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., release=1770267347, vcs-type=git, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.openshift.expose-services=, version=9.7, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, architecture=x86_64, build-date=2026-02-05T04:57:10Z, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 18 14:51:50 compute-0 podman[241849]: 2026-02-18 14:51:50.752840352 +0000 UTC m=+0.080414959 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 18 14:51:52 compute-0 podman[241867]: 2026-02-18 14:51:52.731720129 +0000 UTC m=+0.058573807 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Feb 18 14:51:52 compute-0 podman[241868]: 2026-02-18 14:51:52.769696776 +0000 UTC m=+0.087764408 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., version=9.4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, release=1214.1726694543, architecture=x86_64, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, io.openshift.expose-services=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be 
the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9)
Feb 18 14:51:59 compute-0 podman[204930]: time="2026-02-18T14:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:51:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 14:51:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3882 "" "Go-http-client/1.1"
Feb 18 14:52:01 compute-0 openstack_network_exporter[208107]: ERROR   14:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:52:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:52:01 compute-0 openstack_network_exporter[208107]: ERROR   14:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:52:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:52:01 compute-0 podman[241907]: 2026-02-18 14:52:01.872680754 +0000 UTC m=+0.055903299 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 18 14:52:01 compute-0 podman[241909]: 2026-02-18 14:52:01.884323763 +0000 UTC m=+0.057958951 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 18 14:52:04 compute-0 podman[241949]: 2026-02-18 14:52:04.798914833 +0000 UTC m=+0.130383836 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 18 14:52:16 compute-0 podman[241975]: 2026-02-18 14:52:16.765501951 +0000 UTC m=+0.085087819 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 14:52:16 compute-0 podman[241976]: 2026-02-18 14:52:16.778710358 +0000 UTC m=+0.092193191 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9/ubi-minimal, version=9.7, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9)
Feb 18 14:52:21 compute-0 podman[242018]: 2026-02-18 14:52:21.727455707 +0000 UTC m=+0.059737924 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 18 14:52:23 compute-0 podman[242037]: 2026-02-18 14:52:23.748167737 +0000 UTC m=+0.081942860 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 18 14:52:23 compute-0 podman[242038]: 2026-02-18 14:52:23.789390288 +0000 UTC m=+0.113451713 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, architecture=x86_64, container_name=kepler, name=ubi9, release=1214.1726694543, release-0.7.12=, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, distribution-scope=public, io.buildah.version=1.29.0, vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 18 14:52:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:28.476 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:bc:a6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:81:9f:51:00:b1'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 14:52:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:28.477 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 18 14:52:29 compute-0 podman[204930]: time="2026-02-18T14:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:52:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 14:52:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3870 "" "Go-http-client/1.1"
Feb 18 14:52:30 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:30.479 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bff0df27-aa33-4d98-b417-cc9248f7a486, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:52:31 compute-0 openstack_network_exporter[208107]: ERROR   14:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:52:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:52:31 compute-0 openstack_network_exporter[208107]: ERROR   14:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:52:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:52:32 compute-0 podman[242078]: 2026-02-18 14:52:32.72513062 +0000 UTC m=+0.056358268 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, 
org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Feb 18 14:52:32 compute-0 podman[242079]: 2026-02-18 14:52:32.749690566 +0000 UTC m=+0.069731548 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 14:52:34 compute-0 nova_compute[189016]: 2026-02-18 14:52:34.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:52:34 compute-0 nova_compute[189016]: 2026-02-18 14:52:34.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 14:52:34 compute-0 nova_compute[189016]: 2026-02-18 14:52:34.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 14:52:34 compute-0 nova_compute[189016]: 2026-02-18 14:52:34.064 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 18 14:52:35 compute-0 podman[242121]: 2026-02-18 14:52:35.752783848 +0000 UTC m=+0.086243429 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Feb 18 14:52:36 compute-0 nova_compute[189016]: 2026-02-18 14:52:36.058 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:52:37 compute-0 nova_compute[189016]: 2026-02-18 14:52:37.046 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:52:38 compute-0 nova_compute[189016]: 2026-02-18 14:52:38.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:52:38 compute-0 nova_compute[189016]: 2026-02-18 14:52:38.498 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "debb3011-9258-4f04-9eb4-592cc56eb3eb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:52:38 compute-0 nova_compute[189016]: 2026-02-18 14:52:38.498 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:52:38 compute-0 nova_compute[189016]: 2026-02-18 14:52:38.523 189020 DEBUG nova.compute.manager [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 18 14:52:38 compute-0 nova_compute[189016]: 2026-02-18 14:52:38.638 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:52:38 compute-0 nova_compute[189016]: 2026-02-18 14:52:38.639 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:52:38 compute-0 nova_compute[189016]: 2026-02-18 14:52:38.648 189020 DEBUG nova.virt.hardware [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 18 14:52:38 compute-0 nova_compute[189016]: 2026-02-18 14:52:38.648 189020 INFO nova.compute.claims [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 18 14:52:38 compute-0 nova_compute[189016]: 2026-02-18 14:52:38.781 189020 DEBUG nova.compute.provider_tree [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 14:52:38 compute-0 nova_compute[189016]: 2026-02-18 14:52:38.795 189020 DEBUG nova.scheduler.client.report [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 14:52:38 compute-0 nova_compute[189016]: 2026-02-18 14:52:38.817 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:52:38 compute-0 nova_compute[189016]: 2026-02-18 14:52:38.818 189020 DEBUG nova.compute.manager [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 18 14:52:38 compute-0 nova_compute[189016]: 2026-02-18 14:52:38.866 189020 DEBUG nova.compute.manager [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 18 14:52:38 compute-0 nova_compute[189016]: 2026-02-18 14:52:38.867 189020 DEBUG nova.network.neutron [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 18 14:52:38 compute-0 nova_compute[189016]: 2026-02-18 14:52:38.889 189020 INFO nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 18 14:52:38 compute-0 nova_compute[189016]: 2026-02-18 14:52:38.925 189020 DEBUG nova.compute.manager [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 18 14:52:39 compute-0 nova_compute[189016]: 2026-02-18 14:52:39.004 189020 DEBUG nova.compute.manager [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 18 14:52:39 compute-0 nova_compute[189016]: 2026-02-18 14:52:39.006 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 18 14:52:39 compute-0 nova_compute[189016]: 2026-02-18 14:52:39.007 189020 INFO nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Creating image(s)#033[00m
Feb 18 14:52:39 compute-0 nova_compute[189016]: 2026-02-18 14:52:39.008 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "/var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:52:39 compute-0 nova_compute[189016]: 2026-02-18 14:52:39.008 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:52:39 compute-0 nova_compute[189016]: 2026-02-18 14:52:39.009 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:52:39 compute-0 nova_compute[189016]: 2026-02-18 14:52:39.010 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "8e446fd4a49ba04578b223406ce2c408026401e6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:52:39 compute-0 nova_compute[189016]: 2026-02-18 14:52:39.011 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "8e446fd4a49ba04578b223406ce2c408026401e6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:52:39 compute-0 nova_compute[189016]: 2026-02-18 14:52:39.049 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:52:39 compute-0 nova_compute[189016]: 2026-02-18 14:52:39.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:52:39 compute-0 nova_compute[189016]: 2026-02-18 14:52:39.050 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 14:52:39 compute-0 nova_compute[189016]: 2026-02-18 14:52:39.524 189020 WARNING oslo_policy.policy [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Feb 18 14:52:39 compute-0 nova_compute[189016]: 2026-02-18 14:52:39.525 189020 WARNING oslo_policy.policy [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Feb 18 14:52:40 compute-0 nova_compute[189016]: 2026-02-18 14:52:40.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:52:40 compute-0 nova_compute[189016]: 2026-02-18 14:52:40.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:52:40 compute-0 nova_compute[189016]: 2026-02-18 14:52:40.286 189020 DEBUG nova.network.neutron [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Successfully created port: 15d6e821-445c-43a7-a37c-e5f1566673fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 18 14:52:40 compute-0 nova_compute[189016]: 2026-02-18 14:52:40.334 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:52:40 compute-0 nova_compute[189016]: 2026-02-18 14:52:40.418 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6.part --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:52:40 compute-0 nova_compute[189016]: 2026-02-18 14:52:40.419 189020 DEBUG nova.virt.images [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] 7cc2a96a-1e6c-474d-b671-0e2626bf4158 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Feb 18 14:52:40 compute-0 nova_compute[189016]: 2026-02-18 14:52:40.421 189020 DEBUG nova.privsep.utils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Feb 18 14:52:40 compute-0 nova_compute[189016]: 2026-02-18 14:52:40.421 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6.part /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:52:40 compute-0 nova_compute[189016]: 2026-02-18 14:52:40.596 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6.part /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6.converted" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:52:40 compute-0 nova_compute[189016]: 2026-02-18 14:52:40.599 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:52:40 compute-0 nova_compute[189016]: 2026-02-18 14:52:40.654 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6.converted --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:52:40 compute-0 nova_compute[189016]: 2026-02-18 14:52:40.655 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "8e446fd4a49ba04578b223406ce2c408026401e6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:52:40 compute-0 nova_compute[189016]: 2026-02-18 14:52:40.667 189020 INFO oslo.privsep.daemon [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp13g2_yd5/privsep.sock']#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.076 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.076 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.076 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.076 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.359 189020 INFO oslo.privsep.daemon [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.232 242172 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.236 242172 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.238 242172 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.238 242172 INFO oslo.privsep.daemon [-] privsep daemon running as pid 242172#033[00m
Feb 18 14:52:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:41.419 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:52:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:41.421 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:52:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:41.422 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.422 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.424 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5642MB free_disk=72.27326965332031GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.424 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.424 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.468 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.524 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.524 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.525 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.535 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.536 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "8e446fd4a49ba04578b223406ce2c408026401e6" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.537 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "8e446fd4a49ba04578b223406ce2c408026401e6" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.555 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.578 189020 DEBUG nova.network.neutron [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Successfully updated port: 15d6e821-445c-43a7-a37c-e5f1566673fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.595 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating inventory in ProviderTree for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.597 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.597 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquired lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.597 189020 DEBUG nova.network.neutron [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.629 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.632 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6,backing_fmt=raw /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.658 189020 ERROR nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [req-72e53fa0-5ec1-41f1-acb1-4da313702f1b] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 7d5f91f3-cf81-4de6-86b4-ce92bbe09380.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-72e53fa0-5ec1-41f1-acb1-4da313702f1b"}]}#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.662 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6,backing_fmt=raw /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk 1073741824" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.663 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "8e446fd4a49ba04578b223406ce2c408026401e6" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.663 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.680 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing inventories for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.697 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating ProviderTree inventory for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.698 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating inventory in ProviderTree for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.709 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.711 189020 DEBUG nova.virt.disk.api [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Checking if we can resize image /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.712 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.727 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing aggregate associations for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.761 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing trait associations for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380, traits: HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.768 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.769 189020 DEBUG nova.virt.disk.api [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Cannot resize image /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.770 189020 DEBUG nova.objects.instance [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lazy-loading 'migration_context' on Instance uuid debb3011-9258-4f04-9eb4-592cc56eb3eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.805 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "/var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.807 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.808 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.808 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.809 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.810 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.834 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.836 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.865 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.866 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.879 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.947 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.949 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.949 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:52:41 compute-0 nova_compute[189016]: 2026-02-18 14:52:41.964 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.017 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.020 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.055 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 1073741824" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.056 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.057 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.079 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating inventory in ProviderTree for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.115 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.116 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.116 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Ensure instance console log exists: /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.117 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.117 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.117 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.157 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updated inventory for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.158 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.158 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating inventory in ProviderTree for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.184 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.185 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.221 189020 DEBUG nova.network.neutron [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.276 189020 DEBUG nova.compute.manager [req-6cf10fb5-31e7-4efd-a357-614ade786843 req-85764b1f-858c-41c5-8bd3-af314f969dfa af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Received event network-changed-15d6e821-445c-43a7-a37c-e5f1566673fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.277 189020 DEBUG nova.compute.manager [req-6cf10fb5-31e7-4efd-a357-614ade786843 req-85764b1f-858c-41c5-8bd3-af314f969dfa af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Refreshing instance network info cache due to event network-changed-15d6e821-445c-43a7-a37c-e5f1566673fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.277 189020 DEBUG oslo_concurrency.lockutils [req-6cf10fb5-31e7-4efd-a357-614ade786843 req-85764b1f-858c-41c5-8bd3-af314f969dfa af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.908 189020 DEBUG nova.network.neutron [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updating instance_info_cache with network_info: [{"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.930 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Releasing lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.931 189020 DEBUG nova.compute.manager [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Instance network_info: |[{"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.932 189020 DEBUG oslo_concurrency.lockutils [req-6cf10fb5-31e7-4efd-a357-614ade786843 req-85764b1f-858c-41c5-8bd3-af314f969dfa af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.932 189020 DEBUG nova.network.neutron [req-6cf10fb5-31e7-4efd-a357-614ade786843 req-85764b1f-858c-41c5-8bd3-af314f969dfa af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Refreshing network info cache for port 15d6e821-445c-43a7-a37c-e5f1566673fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.939 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Start _get_guest_xml network_info=[{"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-18T14:51:30Z,direct_url=<?>,disk_format='qcow2',id=7cc2a96a-1e6c-474d-b671-0e2626bf4158,min_disk=0,min_ram=0,name='cirros',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-18T14:51:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'image_id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}], 'ephemerals': [{'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'device_type': 'disk', 'size': 1, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.947 189020 WARNING nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.953 189020 DEBUG nova.virt.libvirt.host [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.954 189020 DEBUG nova.virt.libvirt.host [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.962 189020 DEBUG nova.virt.libvirt.host [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.963 189020 DEBUG nova.virt.libvirt.host [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.964 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.964 189020 DEBUG nova.virt.hardware [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-18T14:51:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='23e98520-0527-4596-8420-5ff1feeb3155',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-18T14:51:30Z,direct_url=<?>,disk_format='qcow2',id=7cc2a96a-1e6c-474d-b671-0e2626bf4158,min_disk=0,min_ram=0,name='cirros',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-18T14:51:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.965 189020 DEBUG nova.virt.hardware [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.965 189020 DEBUG nova.virt.hardware [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.965 189020 DEBUG nova.virt.hardware [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.966 189020 DEBUG nova.virt.hardware [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.966 189020 DEBUG nova.virt.hardware [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.966 189020 DEBUG nova.virt.hardware [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.967 189020 DEBUG nova.virt.hardware [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.967 189020 DEBUG nova.virt.hardware [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.967 189020 DEBUG nova.virt.hardware [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.967 189020 DEBUG nova.virt.hardware [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.974 189020 DEBUG nova.privsep.utils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.976 189020 DEBUG nova.virt.libvirt.vif [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-18T14:52:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71c6c5d63b07447388ace322f081ffc3',ramdisk_id='',reservation_id='r-kc9s0061',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs
=None,updated_at=2026-02-18T14:52:38Z,user_data=None,user_id='387d978e2b494e88ad13abae2a83321d',uuid=debb3011-9258-4f04-9eb4-592cc56eb3eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.976 189020 DEBUG nova.network.os_vif_util [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converting VIF {"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.978 189020 DEBUG nova.network.os_vif_util [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:55:86,bridge_name='br-int',has_traffic_filtering=True,id=15d6e821-445c-43a7-a37c-e5f1566673fe,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15d6e821-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.980 189020 DEBUG nova.objects.instance [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lazy-loading 'pci_devices' on Instance uuid debb3011-9258-4f04-9eb4-592cc56eb3eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 14:52:42 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.997 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] End _get_guest_xml xml=<domain type="kvm">
Feb 18 14:52:42 compute-0 nova_compute[189016]:  <uuid>debb3011-9258-4f04-9eb4-592cc56eb3eb</uuid>
Feb 18 14:52:42 compute-0 nova_compute[189016]:  <name>instance-00000001</name>
Feb 18 14:52:42 compute-0 nova_compute[189016]:  <memory>524288</memory>
Feb 18 14:52:42 compute-0 nova_compute[189016]:  <vcpu>1</vcpu>
Feb 18 14:52:42 compute-0 nova_compute[189016]:  <metadata>
Feb 18 14:52:42 compute-0 nova_compute[189016]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 18 14:52:42 compute-0 nova_compute[189016]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 18 14:52:42 compute-0 nova_compute[189016]:      <nova:name>test_0</nova:name>
Feb 18 14:52:42 compute-0 nova_compute[189016]:      <nova:creationTime>2026-02-18 14:52:42</nova:creationTime>
Feb 18 14:52:42 compute-0 nova_compute[189016]:      <nova:flavor name="m1.small">
Feb 18 14:52:42 compute-0 nova_compute[189016]:        <nova:memory>512</nova:memory>
Feb 18 14:52:42 compute-0 nova_compute[189016]:        <nova:disk>1</nova:disk>
Feb 18 14:52:43 compute-0 nova_compute[189016]:        <nova:swap>0</nova:swap>
Feb 18 14:52:43 compute-0 nova_compute[189016]:        <nova:ephemeral>1</nova:ephemeral>
Feb 18 14:52:43 compute-0 nova_compute[189016]:        <nova:vcpus>1</nova:vcpus>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      </nova:flavor>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <nova:owner>
Feb 18 14:52:43 compute-0 nova_compute[189016]:        <nova:user uuid="387d978e2b494e88ad13abae2a83321d">admin</nova:user>
Feb 18 14:52:43 compute-0 nova_compute[189016]:        <nova:project uuid="71c6c5d63b07447388ace322f081ffc3">admin</nova:project>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      </nova:owner>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <nova:root type="image" uuid="7cc2a96a-1e6c-474d-b671-0e2626bf4158"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <nova:ports>
Feb 18 14:52:43 compute-0 nova_compute[189016]:        <nova:port uuid="15d6e821-445c-43a7-a37c-e5f1566673fe">
Feb 18 14:52:43 compute-0 nova_compute[189016]:          <nova:ip type="fixed" address="192.168.0.87" ipVersion="4"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:        </nova:port>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      </nova:ports>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    </nova:instance>
Feb 18 14:52:43 compute-0 nova_compute[189016]:  </metadata>
Feb 18 14:52:43 compute-0 nova_compute[189016]:  <sysinfo type="smbios">
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <system>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <entry name="manufacturer">RDO</entry>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <entry name="product">OpenStack Compute</entry>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <entry name="serial">debb3011-9258-4f04-9eb4-592cc56eb3eb</entry>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <entry name="uuid">debb3011-9258-4f04-9eb4-592cc56eb3eb</entry>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <entry name="family">Virtual Machine</entry>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    </system>
Feb 18 14:52:43 compute-0 nova_compute[189016]:  </sysinfo>
Feb 18 14:52:43 compute-0 nova_compute[189016]:  <os>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <boot dev="hd"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <smbios mode="sysinfo"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:  </os>
Feb 18 14:52:43 compute-0 nova_compute[189016]:  <features>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <acpi/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <apic/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <vmcoreinfo/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:  </features>
Feb 18 14:52:43 compute-0 nova_compute[189016]:  <clock offset="utc">
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <timer name="pit" tickpolicy="delay"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <timer name="hpet" present="no"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:  </clock>
Feb 18 14:52:43 compute-0 nova_compute[189016]:  <cpu mode="host-model" match="exact">
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <topology sockets="1" cores="1" threads="1"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:  </cpu>
Feb 18 14:52:43 compute-0 nova_compute[189016]:  <devices>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <disk type="file" device="disk">
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <driver name="qemu" type="qcow2" cache="none"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <target dev="vda" bus="virtio"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    </disk>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <disk type="file" device="disk">
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <driver name="qemu" type="qcow2" cache="none"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <target dev="vdb" bus="virtio"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    </disk>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <disk type="file" device="cdrom">
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <driver name="qemu" type="raw" cache="none"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.config"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <target dev="sda" bus="sata"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    </disk>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <interface type="ethernet">
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <mac address="fa:16:3e:c4:55:86"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <driver name="vhost" rx_queue_size="512"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <mtu size="1442"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <target dev="tap15d6e821-44"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    </interface>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <serial type="pty">
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <log file="/var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/console.log" append="off"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    </serial>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <video>
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    </video>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <input type="tablet" bus="usb"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <rng model="virtio">
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <backend model="random">/dev/urandom</backend>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    </rng>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <controller type="usb" index="0"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    <memballoon model="virtio">
Feb 18 14:52:43 compute-0 nova_compute[189016]:      <stats period="10"/>
Feb 18 14:52:43 compute-0 nova_compute[189016]:    </memballoon>
Feb 18 14:52:43 compute-0 nova_compute[189016]:  </devices>
Feb 18 14:52:43 compute-0 nova_compute[189016]: </domain>
Feb 18 14:52:43 compute-0 nova_compute[189016]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.998 189020 DEBUG nova.compute.manager [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Preparing to wait for external event network-vif-plugged-15d6e821-445c-43a7-a37c-e5f1566673fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.999 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.999 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:42.999 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.000 189020 DEBUG nova.virt.libvirt.vif [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-18T14:52:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71c6c5d63b07447388ace322f081ffc3',ramdisk_id='',reservation_id='r-kc9s0061',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,tru
sted_certs=None,updated_at=2026-02-18T14:52:38Z,user_data=None,user_id='387d978e2b494e88ad13abae2a83321d',uuid=debb3011-9258-4f04-9eb4-592cc56eb3eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.001 189020 DEBUG nova.network.os_vif_util [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converting VIF {"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.001 189020 DEBUG nova.network.os_vif_util [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:55:86,bridge_name='br-int',has_traffic_filtering=True,id=15d6e821-445c-43a7-a37c-e5f1566673fe,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15d6e821-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.002 189020 DEBUG os_vif [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:55:86,bridge_name='br-int',has_traffic_filtering=True,id=15d6e821-445c-43a7-a37c-e5f1566673fe,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15d6e821-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.038 189020 DEBUG ovsdbapp.backend.ovs_idl [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.039 189020 DEBUG ovsdbapp.backend.ovs_idl [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.039 189020 DEBUG ovsdbapp.backend.ovs_idl [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.040 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.041 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.041 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.042 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.044 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.046 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.055 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.056 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.056 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.057 189020 INFO oslo.privsep.daemon [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpmq_z1wgi/privsep.sock']#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.808 189020 INFO oslo.privsep.daemon [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.607 242210 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.611 242210 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.613 242210 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Feb 18 14:52:43 compute-0 nova_compute[189016]: 2026-02-18 14:52:43.613 242210 INFO oslo.privsep.daemon [-] privsep daemon running as pid 242210#033[00m
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.094 189020 DEBUG nova.network.neutron [req-6cf10fb5-31e7-4efd-a357-614ade786843 req-85764b1f-858c-41c5-8bd3-af314f969dfa af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updated VIF entry in instance network info cache for port 15d6e821-445c-43a7-a37c-e5f1566673fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.094 189020 DEBUG nova.network.neutron [req-6cf10fb5-31e7-4efd-a357-614ade786843 req-85764b1f-858c-41c5-8bd3-af314f969dfa af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updating instance_info_cache with network_info: [{"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.110 189020 DEBUG oslo_concurrency.lockutils [req-6cf10fb5-31e7-4efd-a357-614ade786843 req-85764b1f-858c-41c5-8bd3-af314f969dfa af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.126 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.127 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15d6e821-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.128 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15d6e821-44, col_values=(('external_ids', {'iface-id': '15d6e821-445c-43a7-a37c-e5f1566673fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:55:86', 'vm-uuid': 'debb3011-9258-4f04-9eb4-592cc56eb3eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.132 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:44 compute-0 NetworkManager[57258]: <info>  [1771426364.1354] manager: (tap15d6e821-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.137 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.144 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.146 189020 INFO os_vif [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:55:86,bridge_name='br-int',has_traffic_filtering=True,id=15d6e821-445c-43a7-a37c-e5f1566673fe,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15d6e821-44')#033[00m
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.198 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.198 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.199 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.199 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No VIF found with MAC fa:16:3e:c4:55:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.200 189020 INFO nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Using config drive#033[00m
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.778 189020 INFO nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Creating config drive at /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.config#033[00m
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.787 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpay0e4q31 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:52:44 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.913 189020 DEBUG oslo_concurrency.processutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpay0e4q31" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:52:44 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Feb 18 14:52:44 compute-0 kernel: tap15d6e821-44: entered promiscuous mode
Feb 18 14:52:44 compute-0 NetworkManager[57258]: <info>  [1771426364.9976] manager: (tap15d6e821-44): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:44.998 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:45 compute-0 ovn_controller[99062]: 2026-02-18T14:52:44Z|00027|binding|INFO|Claiming lport 15d6e821-445c-43a7-a37c-e5f1566673fe for this chassis.
Feb 18 14:52:45 compute-0 ovn_controller[99062]: 2026-02-18T14:52:44Z|00028|binding|INFO|15d6e821-445c-43a7-a37c-e5f1566673fe: Claiming fa:16:3e:c4:55:86 192.168.0.87
Feb 18 14:52:45 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:45.019 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:55:86 192.168.0.87'], port_security=['fa:16:3e:c4:55:86 192.168.0.87'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.87/24', 'neutron:device_id': 'debb3011-9258-4f04-9eb4-592cc56eb3eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71c6c5d63b07447388ace322f081ffc3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37e3ac68-e35f-4df2-b2af-136d5a1ee2d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af26da4e-fd70-4a49-a6e8-0a984b969598, chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], logical_port=15d6e821-445c-43a7-a37c-e5f1566673fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 18 14:52:45 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:45.021 108400 INFO neutron.agent.ovn.metadata.agent [-] Port 15d6e821-445c-43a7-a37c-e5f1566673fe in datapath c269c00a-f738-4cb6-ac67-09050c56f9f2 bound to our chassis
Feb 18 14:52:45 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:45.024 108400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c269c00a-f738-4cb6-ac67-09050c56f9f2
Feb 18 14:52:45 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:45.027 108400 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpqgc49u7j/privsep.sock']
Feb 18 14:52:45 compute-0 systemd-udevd[242238]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.044 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 14:52:45 compute-0 NetworkManager[57258]: <info>  [1771426365.0515] device (tap15d6e821-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 18 14:52:45 compute-0 NetworkManager[57258]: <info>  [1771426365.0526] device (tap15d6e821-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 18 14:52:45 compute-0 ovn_controller[99062]: 2026-02-18T14:52:45Z|00029|binding|INFO|Setting lport 15d6e821-445c-43a7-a37c-e5f1566673fe ovn-installed in OVS
Feb 18 14:52:45 compute-0 ovn_controller[99062]: 2026-02-18T14:52:45Z|00030|binding|INFO|Setting lport 15d6e821-445c-43a7-a37c-e5f1566673fe up in Southbound
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.056 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 14:52:45 compute-0 systemd-machined[158361]: New machine qemu-1-instance-00000001.
Feb 18 14:52:45 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.594 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771426365.5918293, debb3011-9258-4f04-9eb4-592cc56eb3eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.594 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] VM Started (Lifecycle Event)
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.629 189020 DEBUG nova.compute.manager [req-e5d882d9-36eb-4c60-ac0a-4c37b6047149 req-bcddd720-af25-4015-8a51-229d913f7d90 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Received event network-vif-plugged-15d6e821-445c-43a7-a37c-e5f1566673fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.630 189020 DEBUG oslo_concurrency.lockutils [req-e5d882d9-36eb-4c60-ac0a-4c37b6047149 req-bcddd720-af25-4015-8a51-229d913f7d90 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.631 189020 DEBUG oslo_concurrency.lockutils [req-e5d882d9-36eb-4c60-ac0a-4c37b6047149 req-bcddd720-af25-4015-8a51-229d913f7d90 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.631 189020 DEBUG oslo_concurrency.lockutils [req-e5d882d9-36eb-4c60-ac0a-4c37b6047149 req-bcddd720-af25-4015-8a51-229d913f7d90 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.631 189020 DEBUG nova.compute.manager [req-e5d882d9-36eb-4c60-ac0a-4c37b6047149 req-bcddd720-af25-4015-8a51-229d913f7d90 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Processing event network-vif-plugged-15d6e821-445c-43a7-a37c-e5f1566673fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.633 189020 DEBUG nova.compute.manager [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.633 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.646 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.650 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.655 189020 INFO nova.virt.libvirt.driver [-] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Instance spawned successfully.
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.656 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.700 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.700 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771426365.5921702, debb3011-9258-4f04-9eb4-592cc56eb3eb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.701 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] VM Paused (Lifecycle Event)
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.719 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 18 14:52:45 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:45.719 108400 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.719 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 18 14:52:45 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:45.720 108400 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpqgc49u7j/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.720 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.721 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 18 14:52:45 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:45.589 242262 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.721 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 18 14:52:45 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:45.594 242262 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 18 14:52:45 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:45.596 242262 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.722 189020 DEBUG nova.virt.libvirt.driver [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 18 14:52:45 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:45.596 242262 INFO oslo.privsep.daemon [-] privsep daemon running as pid 242262
Feb 18 14:52:45 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:45.724 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[7ca952e8-f201-45b4-9c8f-95dcb8a49e16]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.725 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.730 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771426365.6457267, debb3011-9258-4f04-9eb4-592cc56eb3eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.730 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] VM Resumed (Lifecycle Event)
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.754 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.759 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.785 189020 INFO nova.compute.manager [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Took 6.78 seconds to spawn the instance on the hypervisor.
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.787 189020 DEBUG nova.compute.manager [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.831 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.856 189020 INFO nova.compute.manager [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Took 7.25 seconds to build instance.
Feb 18 14:52:45 compute-0 nova_compute[189016]: 2026-02-18 14:52:45.877 189020 DEBUG oslo_concurrency.lockutils [None req-b73395f3-b174-4f96-ac5f-40d8edd262ef 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 14:52:46 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:46.315 242262 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 14:52:46 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:46.315 242262 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 14:52:46 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:46.315 242262 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 14:52:46 compute-0 nova_compute[189016]: 2026-02-18 14:52:46.575 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 14:52:46 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:46.934 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[fc18a436-fe3f-4cf9-851f-3613a067bf44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 18 14:52:46 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:46.935 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc269c00a-f1 in ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 18 14:52:46 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:46.937 242262 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc269c00a-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 18 14:52:46 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:46.938 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[4930a594-e118-4a34-98f6-e26ebd0339a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 18 14:52:46 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:46.941 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[fcb180e1-cb12-4958-90e9-fa08be19700e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 18 14:52:46 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:46.969 108948 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ac20d6-6565-437b-9189-6fbb2cdb692a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 18 14:52:46 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:46.987 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[8eae9c03-5bc9-428b-a9bb-a811a9d63472]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 18 14:52:46 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:46.990 108400 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp25fmsvop/privsep.sock']
Feb 18 14:52:47 compute-0 podman[242271]: 2026-02-18 14:52:47.055431536 +0000 UTC m=+0.068143558 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 18 14:52:47 compute-0 podman[242273]: 2026-02-18 14:52:47.069480784 +0000 UTC m=+0.081908169 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.7, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 18 14:52:47 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 18 14:52:47 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 18 14:52:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:47.720 108400 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 18 14:52:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:47.721 108400 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp25fmsvop/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 18 14:52:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:47.549 242320 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 18 14:52:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:47.554 242320 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 18 14:52:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:47.556 242320 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 18 14:52:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:47.556 242320 INFO oslo.privsep.daemon [-] privsep daemon running as pid 242320
Feb 18 14:52:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:47.725 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[662cd78a-60a9-446d-b36b-ab1e658aa20a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 18 14:52:47 compute-0 nova_compute[189016]: 2026-02-18 14:52:47.749 189020 DEBUG nova.compute.manager [req-a7c781af-e894-4e51-a53d-067e033d42e7 req-efef6b4a-f5e0-4853-8a5d-a5173714aab4 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Received event network-vif-plugged-15d6e821-445c-43a7-a37c-e5f1566673fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 18 14:52:47 compute-0 nova_compute[189016]: 2026-02-18 14:52:47.750 189020 DEBUG oslo_concurrency.lockutils [req-a7c781af-e894-4e51-a53d-067e033d42e7 req-efef6b4a-f5e0-4853-8a5d-a5173714aab4 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 14:52:47 compute-0 nova_compute[189016]: 2026-02-18 14:52:47.750 189020 DEBUG oslo_concurrency.lockutils [req-a7c781af-e894-4e51-a53d-067e033d42e7 req-efef6b4a-f5e0-4853-8a5d-a5173714aab4 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 14:52:47 compute-0 nova_compute[189016]: 2026-02-18 14:52:47.750 189020 DEBUG oslo_concurrency.lockutils [req-a7c781af-e894-4e51-a53d-067e033d42e7 req-efef6b4a-f5e0-4853-8a5d-a5173714aab4 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 14:52:47 compute-0 nova_compute[189016]: 2026-02-18 14:52:47.750 189020 DEBUG nova.compute.manager [req-a7c781af-e894-4e51-a53d-067e033d42e7 req-efef6b4a-f5e0-4853-8a5d-a5173714aab4 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] No waiting events found dispatching network-vif-plugged-15d6e821-445c-43a7-a37c-e5f1566673fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 18 14:52:47 compute-0 nova_compute[189016]: 2026-02-18 14:52:47.752 189020 WARNING nova.compute.manager [req-a7c781af-e894-4e51-a53d-067e033d42e7 req-efef6b4a-f5e0-4853-8a5d-a5173714aab4 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Received unexpected event network-vif-plugged-15d6e821-445c-43a7-a37c-e5f1566673fe for instance with vm_state active and task_state None.
Feb 18 14:52:48 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:48.314 242320 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 14:52:48 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:48.314 242320 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 14:52:48 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:48.314 242320 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 14:52:48 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:48.932 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[b4f42bb1-ac0e-4322-819c-361c190b0b26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 18 14:52:48 compute-0 NetworkManager[57258]: <info>  [1771426368.9578] manager: (tapc269c00a-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Feb 18 14:52:48 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:48.956 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[a04b80e2-a1ce-478a-a19a-1b71672baed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:52:48 compute-0 systemd-udevd[242351]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 14:52:48 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:48.991 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[b54369e0-b390-4892-9eda-0a92f2558111]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:49.000 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[959f2abb-68c7-4182-8bbd-aa6e6c407cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:52:49 compute-0 NetworkManager[57258]: <info>  [1771426369.0217] device (tapc269c00a-f0): carrier: link connected
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:49.025 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[f0f29b86-1f7b-497f-a131-7a8b94656d5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:49.047 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[6408062c-36b5-476b-aee8-8a360142d4a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc269c00a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:4d:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 346647, 'reachable_time': 28999, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242364, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:49.065 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[abcb336a-2f33-42f2-b79d-c704c6c9eaa4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:4d14'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 346647, 'tstamp': 346647}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242370, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:49.083 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[1e40e0d4-3bc5-484d-b262-7225796f4e3d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc269c00a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:4d:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 346647, 'reachable_time': 28999, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 242371, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:49.111 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[574dfd00-6533-40d6-9816-3715031bc580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:52:49 compute-0 nova_compute[189016]: 2026-02-18 14:52:49.133 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:49.157 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[d16b9b96-fd7d-418d-8628-2d7f9c15ac86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:49.160 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc269c00a-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:49.161 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:49.162 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc269c00a-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:52:49 compute-0 nova_compute[189016]: 2026-02-18 14:52:49.165 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:49 compute-0 kernel: tapc269c00a-f0: entered promiscuous mode
Feb 18 14:52:49 compute-0 NetworkManager[57258]: <info>  [1771426369.1678] manager: (tapc269c00a-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Feb 18 14:52:49 compute-0 nova_compute[189016]: 2026-02-18 14:52:49.169 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:49.172 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc269c00a-f0, col_values=(('external_ids', {'iface-id': '7e592dc1-2432-46dc-b338-f9a04aad5932'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:52:49 compute-0 ovn_controller[99062]: 2026-02-18T14:52:49Z|00031|binding|INFO|Releasing lport 7e592dc1-2432-46dc-b338-f9a04aad5932 from this chassis (sb_readonly=0)
Feb 18 14:52:49 compute-0 nova_compute[189016]: 2026-02-18 14:52:49.176 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:49.181 108400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c269c00a-f738-4cb6-ac67-09050c56f9f2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c269c00a-f738-4cb6-ac67-09050c56f9f2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:49.183 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[903959b3-bad1-4143-b901-286331d25c71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:52:49 compute-0 nova_compute[189016]: 2026-02-18 14:52:49.185 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:49.188 108400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: global
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    log         /dev/log local0 debug
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    log-tag     haproxy-metadata-proxy-c269c00a-f738-4cb6-ac67-09050c56f9f2
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    user        root
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    group       root
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    maxconn     1024
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    pidfile     /var/lib/neutron/external/pids/c269c00a-f738-4cb6-ac67-09050c56f9f2.pid.haproxy
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    daemon
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: defaults
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    log global
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    mode http
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    option httplog
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    option dontlognull
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    option http-server-close
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    option forwardfor
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    retries                 3
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    timeout http-request    30s
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    timeout connect         30s
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    timeout client          32s
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    timeout server          32s
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    timeout http-keep-alive 30s
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: listen listener
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    bind 169.254.169.254:80
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    server metadata /var/lib/neutron/metadata_proxy
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]:    http-request add-header X-OVN-Network-ID c269c00a-f738-4cb6-ac67-09050c56f9f2
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 18 14:52:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:52:49.195 108400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'env', 'PROCESS_TAG=haproxy-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c269c00a-f738-4cb6-ac67-09050c56f9f2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 18 14:52:49 compute-0 podman[242406]: 2026-02-18 14:52:49.634925089 +0000 UTC m=+0.088862946 container create 14faf275f72a2f8ba7ed8f7fcbc9e05f51208d87b86b294145f9b439949686c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 18 14:52:49 compute-0 podman[242406]: 2026-02-18 14:52:49.574698844 +0000 UTC m=+0.028636731 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 18 14:52:49 compute-0 systemd[1]: Started libpod-conmon-14faf275f72a2f8ba7ed8f7fcbc9e05f51208d87b86b294145f9b439949686c4.scope.
Feb 18 14:52:49 compute-0 systemd[1]: Started libcrun container.
Feb 18 14:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d958937cf8ccc7844da577acfc8d789a743c32067c168d385000a3c0f5ba519/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 18 14:52:49 compute-0 podman[242406]: 2026-02-18 14:52:49.771381668 +0000 UTC m=+0.225319555 container init 14faf275f72a2f8ba7ed8f7fcbc9e05f51208d87b86b294145f9b439949686c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 18 14:52:49 compute-0 podman[242406]: 2026-02-18 14:52:49.780690245 +0000 UTC m=+0.234628102 container start 14faf275f72a2f8ba7ed8f7fcbc9e05f51208d87b86b294145f9b439949686c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 18 14:52:49 compute-0 neutron-haproxy-ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2[242421]: [NOTICE]   (242425) : New worker (242427) forked
Feb 18 14:52:49 compute-0 neutron-haproxy-ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2[242421]: [NOTICE]   (242425) : Loading success.
Feb 18 14:52:51 compute-0 nova_compute[189016]: 2026-02-18 14:52:51.578 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:52 compute-0 podman[242436]: 2026-02-18 14:52:52.722071064 +0000 UTC m=+0.050966440 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 18 14:52:54 compute-0 nova_compute[189016]: 2026-02-18 14:52:54.143 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:54 compute-0 podman[242455]: 2026-02-18 14:52:54.767154746 +0000 UTC m=+0.092234442 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 14:52:54 compute-0 podman[242456]: 2026-02-18 14:52:54.783310378 +0000 UTC m=+0.107436180 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release=1214.1726694543, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Feb 18 14:52:56 compute-0 nova_compute[189016]: 2026-02-18 14:52:56.581 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:57 compute-0 ovn_controller[99062]: 2026-02-18T14:52:57Z|00032|binding|INFO|Releasing lport 7e592dc1-2432-46dc-b338-f9a04aad5932 from this chassis (sb_readonly=0)
Feb 18 14:52:57 compute-0 NetworkManager[57258]: <info>  [1771426377.5911] manager: (patch-br-int-to-provnet-947cd5ec-5d87-4629-9a2b-88c4cca86a0a): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Feb 18 14:52:57 compute-0 NetworkManager[57258]: <info>  [1771426377.5917] device (patch-br-int-to-provnet-947cd5ec-5d87-4629-9a2b-88c4cca86a0a)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 18 14:52:57 compute-0 NetworkManager[57258]: <warn>  [1771426377.5921] device (patch-br-int-to-provnet-947cd5ec-5d87-4629-9a2b-88c4cca86a0a)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 18 14:52:57 compute-0 NetworkManager[57258]: <info>  [1771426377.5926] manager: (patch-provnet-947cd5ec-5d87-4629-9a2b-88c4cca86a0a-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Feb 18 14:52:57 compute-0 NetworkManager[57258]: <info>  [1771426377.5928] device (patch-provnet-947cd5ec-5d87-4629-9a2b-88c4cca86a0a-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 18 14:52:57 compute-0 NetworkManager[57258]: <warn>  [1771426377.5929] device (patch-provnet-947cd5ec-5d87-4629-9a2b-88c4cca86a0a-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 18 14:52:57 compute-0 NetworkManager[57258]: <info>  [1771426377.5935] manager: (patch-provnet-947cd5ec-5d87-4629-9a2b-88c4cca86a0a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Feb 18 14:52:57 compute-0 NetworkManager[57258]: <info>  [1771426377.5939] manager: (patch-br-int-to-provnet-947cd5ec-5d87-4629-9a2b-88c4cca86a0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Feb 18 14:52:57 compute-0 NetworkManager[57258]: <info>  [1771426377.5943] device (patch-br-int-to-provnet-947cd5ec-5d87-4629-9a2b-88c4cca86a0a)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 18 14:52:57 compute-0 NetworkManager[57258]: <info>  [1771426377.5945] device (patch-provnet-947cd5ec-5d87-4629-9a2b-88c4cca86a0a-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 18 14:52:57 compute-0 nova_compute[189016]: 2026-02-18 14:52:57.597 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:57 compute-0 ovn_controller[99062]: 2026-02-18T14:52:57Z|00033|binding|INFO|Releasing lport 7e592dc1-2432-46dc-b338-f9a04aad5932 from this chassis (sb_readonly=0)
Feb 18 14:52:57 compute-0 nova_compute[189016]: 2026-02-18 14:52:57.617 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:57 compute-0 nova_compute[189016]: 2026-02-18 14:52:57.996 189020 DEBUG nova.compute.manager [req-93890add-bb6a-4cc4-919d-b61249d30cb2 req-c2ee8fb7-789f-4f44-9ba2-aaef08e297e6 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Received event network-changed-15d6e821-445c-43a7-a37c-e5f1566673fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 14:52:57 compute-0 nova_compute[189016]: 2026-02-18 14:52:57.997 189020 DEBUG nova.compute.manager [req-93890add-bb6a-4cc4-919d-b61249d30cb2 req-c2ee8fb7-789f-4f44-9ba2-aaef08e297e6 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Refreshing instance network info cache due to event network-changed-15d6e821-445c-43a7-a37c-e5f1566673fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 18 14:52:57 compute-0 nova_compute[189016]: 2026-02-18 14:52:57.998 189020 DEBUG oslo_concurrency.lockutils [req-93890add-bb6a-4cc4-919d-b61249d30cb2 req-c2ee8fb7-789f-4f44-9ba2-aaef08e297e6 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 14:52:57 compute-0 nova_compute[189016]: 2026-02-18 14:52:57.998 189020 DEBUG oslo_concurrency.lockutils [req-93890add-bb6a-4cc4-919d-b61249d30cb2 req-c2ee8fb7-789f-4f44-9ba2-aaef08e297e6 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 14:52:57 compute-0 nova_compute[189016]: 2026-02-18 14:52:57.998 189020 DEBUG nova.network.neutron [req-93890add-bb6a-4cc4-919d-b61249d30cb2 req-c2ee8fb7-789f-4f44-9ba2-aaef08e297e6 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Refreshing network info cache for port 15d6e821-445c-43a7-a37c-e5f1566673fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 18 14:52:59 compute-0 nova_compute[189016]: 2026-02-18 14:52:59.148 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:52:59 compute-0 nova_compute[189016]: 2026-02-18 14:52:59.518 189020 DEBUG nova.network.neutron [req-93890add-bb6a-4cc4-919d-b61249d30cb2 req-c2ee8fb7-789f-4f44-9ba2-aaef08e297e6 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updated VIF entry in instance network info cache for port 15d6e821-445c-43a7-a37c-e5f1566673fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 18 14:52:59 compute-0 nova_compute[189016]: 2026-02-18 14:52:59.519 189020 DEBUG nova.network.neutron [req-93890add-bb6a-4cc4-919d-b61249d30cb2 req-c2ee8fb7-789f-4f44-9ba2-aaef08e297e6 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updating instance_info_cache with network_info: [{"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 14:52:59 compute-0 nova_compute[189016]: 2026-02-18 14:52:59.536 189020 DEBUG oslo_concurrency.lockutils [req-93890add-bb6a-4cc4-919d-b61249d30cb2 req-c2ee8fb7-789f-4f44-9ba2-aaef08e297e6 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 14:52:59 compute-0 podman[204930]: time="2026-02-18T14:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:52:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 14:52:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4355 "" "Go-http-client/1.1"
Feb 18 14:53:01 compute-0 openstack_network_exporter[208107]: ERROR   14:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:53:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:53:01 compute-0 openstack_network_exporter[208107]: ERROR   14:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:53:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:53:01 compute-0 nova_compute[189016]: 2026-02-18 14:53:01.584 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:03 compute-0 podman[242495]: 2026-02-18 14:53:03.773169828 +0000 UTC m=+0.078075951 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 14:53:03 compute-0 podman[242494]: 2026-02-18 14:53:03.793063185 +0000 UTC m=+0.097639300 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true)
Feb 18 14:53:04 compute-0 nova_compute[189016]: 2026-02-18 14:53:04.153 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:06 compute-0 nova_compute[189016]: 2026-02-18 14:53:06.586 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:06 compute-0 podman[242535]: 2026-02-18 14:53:06.784591423 +0000 UTC m=+0.104184891 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 14:53:09 compute-0 nova_compute[189016]: 2026-02-18 14:53:09.157 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:11 compute-0 nova_compute[189016]: 2026-02-18 14:53:11.588 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:14 compute-0 nova_compute[189016]: 2026-02-18 14:53:14.161 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:16 compute-0 nova_compute[189016]: 2026-02-18 14:53:16.590 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:17 compute-0 podman[242566]: 2026-02-18 14:53:17.78930803 +0000 UTC m=+0.107014301 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 14:53:17 compute-0 podman[242567]: 2026-02-18 14:53:17.820590718 +0000 UTC m=+0.122466280 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible)
Feb 18 14:53:18 compute-0 ovn_controller[99062]: 2026-02-18T14:53:18Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:55:86 192.168.0.87
Feb 18 14:53:18 compute-0 ovn_controller[99062]: 2026-02-18T14:53:18Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:55:86 192.168.0.87
Feb 18 14:53:19 compute-0 nova_compute[189016]: 2026-02-18 14:53:19.168 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:21 compute-0 nova_compute[189016]: 2026-02-18 14:53:21.593 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:23 compute-0 podman[242611]: 2026-02-18 14:53:23.765317277 +0000 UTC m=+0.093625211 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb 18 14:53:24 compute-0 nova_compute[189016]: 2026-02-18 14:53:24.173 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.189 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.190 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.190 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.190 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f78deee07d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78e0df4470>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.198 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance debb3011-9258-4f04-9eb4-592cc56eb3eb from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 18 14:53:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:25.522 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/debb3011-9258-4f04-9eb4-592cc56eb3eb -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}fc9f399571f1151dad02e1ad7f2b10f5a5ac66aa5da5d4c981c78739a1fdba51" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 18 14:53:25 compute-0 podman[242630]: 2026-02-18 14:53:25.752521415 +0000 UTC m=+0.074224475 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 18 14:53:25 compute-0 podman[242631]: 2026-02-18 14:53:25.770205649 +0000 UTC m=+0.091513470 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, io.openshift.tags=base rhel9, release-0.7.12=, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Feb 18 14:53:26 compute-0 nova_compute[189016]: 2026-02-18 14:53:26.596 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.619 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1849 Content-Type: application/json Date: Wed, 18 Feb 2026 14:53:25 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-665f7f0f-8f22-44b0-8f88-1d2ab36d7c83 x-openstack-request-id: req-665f7f0f-8f22-44b0-8f88-1d2ab36d7c83 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.619 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "debb3011-9258-4f04-9eb4-592cc56eb3eb", "name": "test_0", "status": "ACTIVE", "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "user_id": "387d978e2b494e88ad13abae2a83321d", "metadata": {}, "hostId": "446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd", "image": {"id": "7cc2a96a-1e6c-474d-b671-0e2626bf4158", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/7cc2a96a-1e6c-474d-b671-0e2626bf4158"}]}, "flavor": {"id": "23e98520-0527-4596-8420-5ff1feeb3155", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/23e98520-0527-4596-8420-5ff1feeb3155"}]}, "created": "2026-02-18T14:52:35Z", "updated": "2026-02-18T14:52:45Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.87", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c4:55:86"}, {"version": 4, "addr": "192.168.122.182", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c4:55:86"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/debb3011-9258-4f04-9eb4-592cc56eb3eb"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/debb3011-9258-4f04-9eb4-592cc56eb3eb"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-18T14:52:45.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.619 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/debb3011-9258-4f04-9eb4-592cc56eb3eb used request id req-665f7f0f-8f22-44b0-8f88-1d2ab36d7c83 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.622 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'debb3011-9258-4f04-9eb4-592cc56eb3eb', 'name': 'test_0', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.622 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.622 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.622 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.623 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.626 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-18T14:53:26.622736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.690 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 698777971 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.691 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 118502891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.691 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 69122624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.692 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.692 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f78deee0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.692 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.692 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.692 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.693 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.693 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.693 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.693 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.694 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.694 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f78deee0830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.694 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.694 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.694 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.694 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.694 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.695 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.695 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.695 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.695 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f78deee2ae0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.696 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.696 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.696 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.697 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.697 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-18T14:53:26.692923) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.697 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-18T14:53:26.694686) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.698 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-18T14:53:26.697141) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.717 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.717 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.717 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.718 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.718 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f78deee0890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.718 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.718 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.718 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.718 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.718 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.718 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.719 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.719 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-18T14:53:26.718545) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.719 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f78deee08f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.719 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.720 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.720 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.720 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.720 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.720 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-18T14:53:26.720139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.720 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.721 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.721 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f78deee2390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.721 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.721 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.721 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.721 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.722 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-18T14:53:26.721615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.725 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for debb3011-9258-4f04-9eb4-592cc56eb3eb / tap15d6e821-44 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.725 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.726 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.726 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f78e1127770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.726 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.726 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.726 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.726 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.726 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-18T14:53:26.726538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.746 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/cpu volume: 31990000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.747 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.747 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f78deee0950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.747 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.747 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.747 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.747 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.748 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-18T14:53:26.747739) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.748 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 2516043808 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.748 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 11626356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.748 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.749 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.749 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f78deee21b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.749 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.749 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.749 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.749 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.749 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.749 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.749 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f78deee0d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.749 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.750 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-18T14:53:26.749404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.750 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.750 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.750 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.750 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.750 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.750 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f78deee09b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.751 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.751 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.751 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.751 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.751 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 223 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.751 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.751 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.752 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-18T14:53:26.750450) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.752 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-18T14:53:26.751354) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.752 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.752 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f78deee1010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.752 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.752 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.753 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.753 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.753 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-18T14:53:26.753095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.753 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.753 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.753 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f78deee0a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.753 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.753 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.754 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.754 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.754 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.754 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f78deee1280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.754 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.754 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.754 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.754 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.755 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-18T14:53:26.754123) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.755 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-18T14:53:26.754800) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.755 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.755 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.755 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f78deee2240>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.755 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.755 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.755 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.755 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.756 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes volume: 1582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.756 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-18T14:53:26.755864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.756 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.756 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f78deee0a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.756 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.756 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.756 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.756 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.757 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.757 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f78deee22d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.757 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.757 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-18T14:53:26.756896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.757 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.757 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.757 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.758 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.758 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-18T14:53:26.757916) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.758 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f78deee2330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.758 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.758 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.759 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.759 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.759 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-18T14:53:26.759144) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.759 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.759 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.760 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f78deee23c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.760 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.760 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.761 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.761 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.761 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.761 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-18T14:53:26.761070) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.761 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.762 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f78deee0cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.762 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.762 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.762 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.762 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.762 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-18T14:53:26.762271) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.762 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/memory.usage volume: 49.54296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.763 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.763 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f78deee2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.763 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.763 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.763 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.763 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.763 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-18T14:53:26.763467) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.763 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.764 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.764 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.764 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.764 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f78deee0d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.764 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.764 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.765 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.765 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.765 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes volume: 1884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.765 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-18T14:53:26.765132) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.765 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.765 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f78deee1520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.766 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.766 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.766 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.766 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.766 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-18T14:53:26.766251) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.766 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.766 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.767 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f78deee20f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.767 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.767 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.767 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.767 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.768 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.768 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-18T14:53:26.767382) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.768 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.768 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f78deee2420>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.768 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.768 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.769 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.769 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.769 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.769 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-18T14:53:26.769121) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.769 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.770 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.770 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.770 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.770 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.770 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.770 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.770 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.770 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.770 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.770 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.771 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.771 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.771 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.771 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.771 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.771 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.771 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.771 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.771 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.771 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.771 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.771 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.771 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.772 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.772 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:53:26.772 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:53:27 compute-0 ovn_controller[99062]: 2026-02-18T14:53:27Z|00034|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Feb 18 14:53:29 compute-0 nova_compute[189016]: 2026-02-18 14:53:29.179 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:29 compute-0 podman[204930]: time="2026-02-18T14:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:53:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 14:53:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4361 "" "Go-http-client/1.1"
Feb 18 14:53:31 compute-0 openstack_network_exporter[208107]: ERROR   14:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:53:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:53:31 compute-0 openstack_network_exporter[208107]: ERROR   14:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:53:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:53:31 compute-0 nova_compute[189016]: 2026-02-18 14:53:31.599 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:34 compute-0 nova_compute[189016]: 2026-02-18 14:53:34.185 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:34 compute-0 podman[242671]: 2026-02-18 14:53:34.72162914 +0000 UTC m=+0.050968424 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 14:53:34 compute-0 podman[242670]: 2026-02-18 14:53:34.765918468 +0000 UTC m=+0.096813110 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, managed_by=edpm_ansible, org.label-schema.build-date=20260216)
Feb 18 14:53:36 compute-0 nova_compute[189016]: 2026-02-18 14:53:36.602 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:37 compute-0 podman[242714]: 2026-02-18 14:53:37.027251312 +0000 UTC m=+0.098727467 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 18 14:53:37 compute-0 nova_compute[189016]: 2026-02-18 14:53:37.186 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:53:37 compute-0 nova_compute[189016]: 2026-02-18 14:53:37.187 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 14:53:37 compute-0 nova_compute[189016]: 2026-02-18 14:53:37.188 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 14:53:37 compute-0 nova_compute[189016]: 2026-02-18 14:53:37.428 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 14:53:37 compute-0 nova_compute[189016]: 2026-02-18 14:53:37.429 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 14:53:37 compute-0 nova_compute[189016]: 2026-02-18 14:53:37.429 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 14:53:37 compute-0 nova_compute[189016]: 2026-02-18 14:53:37.430 189020 DEBUG nova.objects.instance [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lazy-loading 'info_cache' on Instance uuid debb3011-9258-4f04-9eb4-592cc56eb3eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 14:53:38 compute-0 nova_compute[189016]: 2026-02-18 14:53:38.814 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updating instance_info_cache with network_info: [{"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 14:53:38 compute-0 nova_compute[189016]: 2026-02-18 14:53:38.832 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 14:53:38 compute-0 nova_compute[189016]: 2026-02-18 14:53:38.833 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 14:53:38 compute-0 nova_compute[189016]: 2026-02-18 14:53:38.834 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:53:39 compute-0 nova_compute[189016]: 2026-02-18 14:53:39.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:53:39 compute-0 nova_compute[189016]: 2026-02-18 14:53:39.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:53:39 compute-0 nova_compute[189016]: 2026-02-18 14:53:39.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 14:53:39 compute-0 nova_compute[189016]: 2026-02-18 14:53:39.191 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:40 compute-0 nova_compute[189016]: 2026-02-18 14:53:40.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:53:40 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:40.619 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:bc:a6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:81:9f:51:00:b1'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 14:53:40 compute-0 nova_compute[189016]: 2026-02-18 14:53:40.622 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:40 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:40.623 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 18 14:53:41 compute-0 nova_compute[189016]: 2026-02-18 14:53:41.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:53:41 compute-0 nova_compute[189016]: 2026-02-18 14:53:41.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:53:41 compute-0 nova_compute[189016]: 2026-02-18 14:53:41.053 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:53:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:41.421 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:53:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:41.422 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:53:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:41.423 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:53:41 compute-0 nova_compute[189016]: 2026-02-18 14:53:41.604 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.079 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.080 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.081 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.081 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.189 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.257 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.259 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.317 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.319 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.371 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.372 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.421 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.747 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.748 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5233MB free_disk=72.2486343383789GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.749 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.749 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.834 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.834 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.835 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.883 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.903 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.926 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 14:53:42 compute-0 nova_compute[189016]: 2026-02-18 14:53:42.926 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:53:44 compute-0 nova_compute[189016]: 2026-02-18 14:53:44.197 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:44 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:44.626 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bff0df27-aa33-4d98-b417-cc9248f7a486, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.177 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "9a9ee96c-8146-46a1-a098-5d021fb5e779" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.178 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.194 189020 DEBUG nova.compute.manager [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.264 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.264 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.273 189020 DEBUG nova.virt.hardware [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.274 189020 INFO nova.compute.claims [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.384 189020 DEBUG nova.compute.provider_tree [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.399 189020 DEBUG nova.scheduler.client.report [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.426 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.427 189020 DEBUG nova.compute.manager [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.468 189020 DEBUG nova.compute.manager [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.469 189020 DEBUG nova.network.neutron [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.498 189020 INFO nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.547 189020 DEBUG nova.compute.manager [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.647 189020 DEBUG nova.compute.manager [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.649 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.650 189020 INFO nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Creating image(s)#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.652 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "/var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.653 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.654 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.676 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.735 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.736 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "8e446fd4a49ba04578b223406ce2c408026401e6" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.737 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "8e446fd4a49ba04578b223406ce2c408026401e6" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.751 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.798 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.799 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6,backing_fmt=raw /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.833 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6,backing_fmt=raw /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk 1073741824" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.834 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "8e446fd4a49ba04578b223406ce2c408026401e6" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.835 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.887 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.888 189020 DEBUG nova.virt.disk.api [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Checking if we can resize image /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.888 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.943 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.944 189020 DEBUG nova.virt.disk.api [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Cannot resize image /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.945 189020 DEBUG nova.objects.instance [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lazy-loading 'migration_context' on Instance uuid 9a9ee96c-8146-46a1-a098-5d021fb5e779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.959 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "/var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.959 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.960 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:53:45 compute-0 nova_compute[189016]: 2026-02-18 14:53:45.971 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:53:46 compute-0 nova_compute[189016]: 2026-02-18 14:53:46.034 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:53:46 compute-0 nova_compute[189016]: 2026-02-18 14:53:46.035 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:53:46 compute-0 nova_compute[189016]: 2026-02-18 14:53:46.036 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:53:46 compute-0 nova_compute[189016]: 2026-02-18 14:53:46.055 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:53:46 compute-0 nova_compute[189016]: 2026-02-18 14:53:46.115 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:53:46 compute-0 nova_compute[189016]: 2026-02-18 14:53:46.116 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:53:46 compute-0 nova_compute[189016]: 2026-02-18 14:53:46.150 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 1073741824" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:53:46 compute-0 nova_compute[189016]: 2026-02-18 14:53:46.152 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:53:46 compute-0 nova_compute[189016]: 2026-02-18 14:53:46.152 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:53:46 compute-0 nova_compute[189016]: 2026-02-18 14:53:46.207 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:53:46 compute-0 nova_compute[189016]: 2026-02-18 14:53:46.209 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 18 14:53:46 compute-0 nova_compute[189016]: 2026-02-18 14:53:46.209 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Ensure instance console log exists: /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 18 14:53:46 compute-0 nova_compute[189016]: 2026-02-18 14:53:46.210 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:53:46 compute-0 nova_compute[189016]: 2026-02-18 14:53:46.211 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:53:46 compute-0 nova_compute[189016]: 2026-02-18 14:53:46.211 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:53:46 compute-0 nova_compute[189016]: 2026-02-18 14:53:46.605 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:48 compute-0 nova_compute[189016]: 2026-02-18 14:53:48.277 189020 DEBUG nova.network.neutron [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Successfully updated port: 578e1a09-d9b1-45b7-905b-69ab1a58cbe0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 18 14:53:48 compute-0 nova_compute[189016]: 2026-02-18 14:53:48.292 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 14:53:48 compute-0 nova_compute[189016]: 2026-02-18 14:53:48.293 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquired lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 14:53:48 compute-0 nova_compute[189016]: 2026-02-18 14:53:48.293 189020 DEBUG nova.network.neutron [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 18 14:53:48 compute-0 nova_compute[189016]: 2026-02-18 14:53:48.359 189020 DEBUG nova.compute.manager [req-cf5dedec-8cad-4734-85a9-0bd0d34b3569 req-1c3a65c7-e78b-4176-8d9d-9bf893111742 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Received event network-changed-578e1a09-d9b1-45b7-905b-69ab1a58cbe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 14:53:48 compute-0 nova_compute[189016]: 2026-02-18 14:53:48.359 189020 DEBUG nova.compute.manager [req-cf5dedec-8cad-4734-85a9-0bd0d34b3569 req-1c3a65c7-e78b-4176-8d9d-9bf893111742 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Refreshing instance network info cache due to event network-changed-578e1a09-d9b1-45b7-905b-69ab1a58cbe0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 18 14:53:48 compute-0 nova_compute[189016]: 2026-02-18 14:53:48.360 189020 DEBUG oslo_concurrency.lockutils [req-cf5dedec-8cad-4734-85a9-0bd0d34b3569 req-1c3a65c7-e78b-4176-8d9d-9bf893111742 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 14:53:48 compute-0 nova_compute[189016]: 2026-02-18 14:53:48.413 189020 DEBUG nova.network.neutron [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 18 14:53:48 compute-0 podman[242781]: 2026-02-18 14:53:48.764180694 +0000 UTC m=+0.075683230 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 14:53:48 compute-0 podman[242782]: 2026-02-18 14:53:48.776057346 +0000 UTC m=+0.087848649 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1770267347, version=9.7, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64)
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.202 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.730 189020 DEBUG nova.network.neutron [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Updating instance_info_cache with network_info: [{"id": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "address": "fa:16:3e:65:30:c3", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578e1a09-d9", "ovs_interfaceid": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.753 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Releasing lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.753 189020 DEBUG nova.compute.manager [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Instance network_info: |[{"id": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "address": "fa:16:3e:65:30:c3", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578e1a09-d9", "ovs_interfaceid": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.754 189020 DEBUG oslo_concurrency.lockutils [req-cf5dedec-8cad-4734-85a9-0bd0d34b3569 req-1c3a65c7-e78b-4176-8d9d-9bf893111742 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.754 189020 DEBUG nova.network.neutron [req-cf5dedec-8cad-4734-85a9-0bd0d34b3569 req-1c3a65c7-e78b-4176-8d9d-9bf893111742 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Refreshing network info cache for port 578e1a09-d9b1-45b7-905b-69ab1a58cbe0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.757 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Start _get_guest_xml network_info=[{"id": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "address": "fa:16:3e:65:30:c3", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578e1a09-d9", "ovs_interfaceid": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-18T14:51:30Z,direct_url=<?>,disk_format='qcow2',id=7cc2a96a-1e6c-474d-b671-0e2626bf4158,min_disk=0,min_ram=0,name='cirros',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-18T14:51:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'image_id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}], 'ephemerals': [{'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'device_type': 'disk', 'size': 1, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.765 189020 WARNING nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.775 189020 DEBUG nova.virt.libvirt.host [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.776 189020 DEBUG nova.virt.libvirt.host [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.781 189020 DEBUG nova.virt.libvirt.host [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.782 189020 DEBUG nova.virt.libvirt.host [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.782 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.782 189020 DEBUG nova.virt.hardware [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-18T14:51:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='23e98520-0527-4596-8420-5ff1feeb3155',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-18T14:51:30Z,direct_url=<?>,disk_format='qcow2',id=7cc2a96a-1e6c-474d-b671-0e2626bf4158,min_disk=0,min_ram=0,name='cirros',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-18T14:51:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.783 189020 DEBUG nova.virt.hardware [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.784 189020 DEBUG nova.virt.hardware [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.784 189020 DEBUG nova.virt.hardware [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.784 189020 DEBUG nova.virt.hardware [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.785 189020 DEBUG nova.virt.hardware [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.785 189020 DEBUG nova.virt.hardware [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.785 189020 DEBUG nova.virt.hardware [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.786 189020 DEBUG nova.virt.hardware [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.786 189020 DEBUG nova.virt.hardware [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.787 189020 DEBUG nova.virt.hardware [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.790 189020 DEBUG nova.virt.libvirt.vif [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-18T14:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-mz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-vnf-e5gqxak74kuc',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-vnf-e5gqxak74kuc',id=2,image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='449d1667-0173-4809-b0e3-b50e27381afa'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71c6c5d63b07447388ace322f081ffc3',ramdisk_id='',reservation_id='r-3rhtbt0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-18T14:53:45Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MjM2MzA4NTA1NjAxMDY3MDg3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcyMzYzMDg1MDU2MDEwNjcwODc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzIzNjMwODUwNTYwMTA2NzA4Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcyMzYzMDg1MDU2MDEwNjcwODc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MjM2MzA4NTA1NjAxMDY3MDg3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MjM2MzA4NTA1NjAxMDY3MDg3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Feb 18 14:53:49 compute-0 nova_compute[189016]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzIzNjMwODUwNTYwMTA2NzA4Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcyMzYzMDg1MDU2MDEwNjcwODc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MjM2MzA4NTA1NjAxMDY3MDg3PT0tLQo=',user_id='387d978e2b494e88ad13abae2a83321d',uuid=9a9ee96c-8146-46a1-a098-5d021fb5e779,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "address": "fa:16:3e:65:30:c3", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578e1a09-d9", "ovs_interfaceid": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.791 189020 DEBUG nova.network.os_vif_util [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converting VIF {"id": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "address": "fa:16:3e:65:30:c3", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578e1a09-d9", "ovs_interfaceid": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.792 189020 DEBUG nova.network.os_vif_util [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:30:c3,bridge_name='br-int',has_traffic_filtering=True,id=578e1a09-d9b1-45b7-905b-69ab1a58cbe0,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap578e1a09-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.793 189020 DEBUG nova.objects.instance [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a9ee96c-8146-46a1-a098-5d021fb5e779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.812 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] End _get_guest_xml xml=<domain type="kvm">
Feb 18 14:53:49 compute-0 nova_compute[189016]:  <uuid>9a9ee96c-8146-46a1-a098-5d021fb5e779</uuid>
Feb 18 14:53:49 compute-0 nova_compute[189016]:  <name>instance-00000002</name>
Feb 18 14:53:49 compute-0 nova_compute[189016]:  <memory>524288</memory>
Feb 18 14:53:49 compute-0 nova_compute[189016]:  <vcpu>1</vcpu>
Feb 18 14:53:49 compute-0 nova_compute[189016]:  <metadata>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <nova:name>vn-mz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-vnf-e5gqxak74kuc</nova:name>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <nova:creationTime>2026-02-18 14:53:49</nova:creationTime>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <nova:flavor name="m1.small">
Feb 18 14:53:49 compute-0 nova_compute[189016]:        <nova:memory>512</nova:memory>
Feb 18 14:53:49 compute-0 nova_compute[189016]:        <nova:disk>1</nova:disk>
Feb 18 14:53:49 compute-0 nova_compute[189016]:        <nova:swap>0</nova:swap>
Feb 18 14:53:49 compute-0 nova_compute[189016]:        <nova:ephemeral>1</nova:ephemeral>
Feb 18 14:53:49 compute-0 nova_compute[189016]:        <nova:vcpus>1</nova:vcpus>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      </nova:flavor>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <nova:owner>
Feb 18 14:53:49 compute-0 nova_compute[189016]:        <nova:user uuid="387d978e2b494e88ad13abae2a83321d">admin</nova:user>
Feb 18 14:53:49 compute-0 nova_compute[189016]:        <nova:project uuid="71c6c5d63b07447388ace322f081ffc3">admin</nova:project>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      </nova:owner>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <nova:root type="image" uuid="7cc2a96a-1e6c-474d-b671-0e2626bf4158"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <nova:ports>
Feb 18 14:53:49 compute-0 nova_compute[189016]:        <nova:port uuid="578e1a09-d9b1-45b7-905b-69ab1a58cbe0">
Feb 18 14:53:49 compute-0 nova_compute[189016]:          <nova:ip type="fixed" address="192.168.0.167" ipVersion="4"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:        </nova:port>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      </nova:ports>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    </nova:instance>
Feb 18 14:53:49 compute-0 nova_compute[189016]:  </metadata>
Feb 18 14:53:49 compute-0 nova_compute[189016]:  <sysinfo type="smbios">
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <system>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <entry name="manufacturer">RDO</entry>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <entry name="product">OpenStack Compute</entry>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <entry name="serial">9a9ee96c-8146-46a1-a098-5d021fb5e779</entry>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <entry name="uuid">9a9ee96c-8146-46a1-a098-5d021fb5e779</entry>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <entry name="family">Virtual Machine</entry>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    </system>
Feb 18 14:53:49 compute-0 nova_compute[189016]:  </sysinfo>
Feb 18 14:53:49 compute-0 nova_compute[189016]:  <os>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <boot dev="hd"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <smbios mode="sysinfo"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:  </os>
Feb 18 14:53:49 compute-0 nova_compute[189016]:  <features>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <acpi/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <apic/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <vmcoreinfo/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:  </features>
Feb 18 14:53:49 compute-0 nova_compute[189016]:  <clock offset="utc">
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <timer name="pit" tickpolicy="delay"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <timer name="hpet" present="no"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:  </clock>
Feb 18 14:53:49 compute-0 nova_compute[189016]:  <cpu mode="host-model" match="exact">
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <topology sockets="1" cores="1" threads="1"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:  </cpu>
Feb 18 14:53:49 compute-0 nova_compute[189016]:  <devices>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <disk type="file" device="disk">
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <driver name="qemu" type="qcow2" cache="none"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <target dev="vda" bus="virtio"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    </disk>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <disk type="file" device="disk">
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <driver name="qemu" type="qcow2" cache="none"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <target dev="vdb" bus="virtio"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    </disk>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <disk type="file" device="cdrom">
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <driver name="qemu" type="raw" cache="none"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.config"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <target dev="sda" bus="sata"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    </disk>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <interface type="ethernet">
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <mac address="fa:16:3e:65:30:c3"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <driver name="vhost" rx_queue_size="512"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <mtu size="1442"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <target dev="tap578e1a09-d9"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    </interface>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <serial type="pty">
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <log file="/var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/console.log" append="off"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    </serial>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <video>
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    </video>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <input type="tablet" bus="usb"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <rng model="virtio">
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <backend model="random">/dev/urandom</backend>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    </rng>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <controller type="usb" index="0"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    <memballoon model="virtio">
Feb 18 14:53:49 compute-0 nova_compute[189016]:      <stats period="10"/>
Feb 18 14:53:49 compute-0 nova_compute[189016]:    </memballoon>
Feb 18 14:53:49 compute-0 nova_compute[189016]:  </devices>
Feb 18 14:53:49 compute-0 nova_compute[189016]: </domain>
Feb 18 14:53:49 compute-0 nova_compute[189016]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.814 189020 DEBUG nova.compute.manager [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Preparing to wait for external event network-vif-plugged-578e1a09-d9b1-45b7-905b-69ab1a58cbe0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.814 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.814 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.815 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.815 189020 DEBUG nova.virt.libvirt.vif [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-18T14:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-mz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-vnf-e5gqxak74kuc',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-vnf-e5gqxak74kuc',id=2,image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='449d1667-0173-4809-b0e3-b50e27381afa'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71c6c5d63b07447388ace322f081ffc3',ramdisk_id='',reservation_id='r-3rhtbt0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-18T14:53:45Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MjM2MzA4NTA1NjAxMDY3MDg3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcyMzYzMDg1MDU2MDEwNjcwODc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzIzNjMwODUwNTYwMTA2NzA4Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcyMzYzMDg1MDU2MDEwNjcwODc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MjM2MzA4NTA1NjAxMDY3MDg3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MjM2MzA4NTA1NjAxMDY3MDg3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Feb 18 14:53:49 compute-0 nova_compute[189016]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzIzNjMwODUwNTYwMTA2NzA4Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcyMzYzMDg1MDU2MDEwNjcwODc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MjM2MzA4NTA1NjAxMDY3MDg3PT0tLQo=',user_id='387d978e2b494e88ad13abae2a83321d',uuid=9a9ee96c-8146-46a1-a098-5d021fb5e779,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "address": "fa:16:3e:65:30:c3", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578e1a09-d9", "ovs_interfaceid": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.816 189020 DEBUG nova.network.os_vif_util [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converting VIF {"id": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "address": "fa:16:3e:65:30:c3", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578e1a09-d9", "ovs_interfaceid": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.817 189020 DEBUG nova.network.os_vif_util [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:30:c3,bridge_name='br-int',has_traffic_filtering=True,id=578e1a09-d9b1-45b7-905b-69ab1a58cbe0,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap578e1a09-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.817 189020 DEBUG os_vif [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:30:c3,bridge_name='br-int',has_traffic_filtering=True,id=578e1a09-d9b1-45b7-905b-69ab1a58cbe0,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap578e1a09-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.818 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.818 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.819 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.825 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.826 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap578e1a09-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.827 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap578e1a09-d9, col_values=(('external_ids', {'iface-id': '578e1a09-d9b1-45b7-905b-69ab1a58cbe0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:30:c3', 'vm-uuid': '9a9ee96c-8146-46a1-a098-5d021fb5e779'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.829 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:49 compute-0 NetworkManager[57258]: <info>  [1771426429.8306] manager: (tap578e1a09-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.832 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.836 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.837 189020 INFO os_vif [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:30:c3,bridge_name='br-int',has_traffic_filtering=True,id=578e1a09-d9b1-45b7-905b-69ab1a58cbe0,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap578e1a09-d9')#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.894 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.895 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.895 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.896 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No VIF found with MAC fa:16:3e:65:30:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 18 14:53:49 compute-0 nova_compute[189016]: 2026-02-18 14:53:49.896 189020 INFO nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Using config drive#033[00m
Feb 18 14:53:50 compute-0 rsyslogd[239561]: message too long (8192) with configured size 8096, begin of message is: 2026-02-18 14:53:49.790 189020 DEBUG nova.virt.libvirt.vif [None req-26893278-a6 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 14:53:50 compute-0 rsyslogd[239561]: message too long (8192) with configured size 8096, begin of message is: 2026-02-18 14:53:49.815 189020 DEBUG nova.virt.libvirt.vif [None req-26893278-a6 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 14:53:50 compute-0 nova_compute[189016]: 2026-02-18 14:53:50.710 189020 INFO nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Creating config drive at /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.config#033[00m
Feb 18 14:53:50 compute-0 nova_compute[189016]: 2026-02-18 14:53:50.716 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprueq4y58 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:53:50 compute-0 nova_compute[189016]: 2026-02-18 14:53:50.841 189020 DEBUG oslo_concurrency.processutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprueq4y58" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:53:50 compute-0 kernel: tap578e1a09-d9: entered promiscuous mode
Feb 18 14:53:50 compute-0 NetworkManager[57258]: <info>  [1771426430.8980] manager: (tap578e1a09-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Feb 18 14:53:50 compute-0 ovn_controller[99062]: 2026-02-18T14:53:50Z|00035|binding|INFO|Claiming lport 578e1a09-d9b1-45b7-905b-69ab1a58cbe0 for this chassis.
Feb 18 14:53:50 compute-0 ovn_controller[99062]: 2026-02-18T14:53:50Z|00036|binding|INFO|578e1a09-d9b1-45b7-905b-69ab1a58cbe0: Claiming fa:16:3e:65:30:c3 192.168.0.167
Feb 18 14:53:50 compute-0 nova_compute[189016]: 2026-02-18 14:53:50.902 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:50 compute-0 ovn_controller[99062]: 2026-02-18T14:53:50Z|00037|binding|INFO|Setting lport 578e1a09-d9b1-45b7-905b-69ab1a58cbe0 ovn-installed in OVS
Feb 18 14:53:50 compute-0 ovn_controller[99062]: 2026-02-18T14:53:50Z|00038|binding|INFO|Setting lport 578e1a09-d9b1-45b7-905b-69ab1a58cbe0 up in Southbound
Feb 18 14:53:50 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:50.910 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:30:c3 192.168.0.167'], port_security=['fa:16:3e:65:30:c3 192.168.0.167'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-g63ccmz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-port-ngab67dd2bgj', 'neutron:cidrs': '192.168.0.167/24', 'neutron:device_id': '9a9ee96c-8146-46a1-a098-5d021fb5e779', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-g63ccmz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-port-ngab67dd2bgj', 'neutron:project_id': '71c6c5d63b07447388ace322f081ffc3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37e3ac68-e35f-4df2-b2af-136d5a1ee2d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af26da4e-fd70-4a49-a6e8-0a984b969598, chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], logical_port=578e1a09-d9b1-45b7-905b-69ab1a58cbe0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 14:53:50 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:50.914 108400 INFO neutron.agent.ovn.metadata.agent [-] Port 578e1a09-d9b1-45b7-905b-69ab1a58cbe0 in datapath c269c00a-f738-4cb6-ac67-09050c56f9f2 bound to our chassis#033[00m
Feb 18 14:53:50 compute-0 nova_compute[189016]: 2026-02-18 14:53:50.917 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:50 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:50.919 108400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c269c00a-f738-4cb6-ac67-09050c56f9f2#033[00m
Feb 18 14:53:50 compute-0 systemd-machined[158361]: New machine qemu-2-instance-00000002.
Feb 18 14:53:50 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:50.939 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[460903b7-30c3-4c59-8305-fd17dce9813a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:53:50 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:50.964 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[91af6824-5776-48bb-b831-490f9408594d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:53:50 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Feb 18 14:53:50 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:50.972 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[f162b208-e1c0-41a4-94e8-2a053ba68b28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:53:50 compute-0 systemd-udevd[242856]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 14:53:50 compute-0 NetworkManager[57258]: <info>  [1771426430.9938] device (tap578e1a09-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 18 14:53:50 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:50.995 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[18838097-92c1-4e19-a2b4-ab52759fc215]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:53:50 compute-0 NetworkManager[57258]: <info>  [1771426430.9981] device (tap578e1a09-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 18 14:53:51 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:51.009 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe831d2-11a9-487e-b9b1-5dd8e1cb16cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc269c00a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:4d:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 346647, 'reachable_time': 28999, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242860, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:53:51 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:51.022 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b51cd3-0ea6-4b93-aebf-de56ad774690]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc269c00a-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 346658, 'tstamp': 346658}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242861, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc269c00a-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 346660, 'tstamp': 346660}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242861, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:53:51 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:51.025 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc269c00a-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.027 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.029 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:51 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:51.029 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc269c00a-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:53:51 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:51.030 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 14:53:51 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:51.030 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc269c00a-f0, col_values=(('external_ids', {'iface-id': '7e592dc1-2432-46dc-b338-f9a04aad5932'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:53:51 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:53:51.031 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.207 189020 DEBUG nova.compute.manager [req-e2d7f7b9-4210-45be-aa64-df28738038ec req-27bc70b4-4a2a-452e-85ca-9ea78de80a17 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Received event network-vif-plugged-578e1a09-d9b1-45b7-905b-69ab1a58cbe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.207 189020 DEBUG oslo_concurrency.lockutils [req-e2d7f7b9-4210-45be-aa64-df28738038ec req-27bc70b4-4a2a-452e-85ca-9ea78de80a17 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.208 189020 DEBUG oslo_concurrency.lockutils [req-e2d7f7b9-4210-45be-aa64-df28738038ec req-27bc70b4-4a2a-452e-85ca-9ea78de80a17 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.208 189020 DEBUG oslo_concurrency.lockutils [req-e2d7f7b9-4210-45be-aa64-df28738038ec req-27bc70b4-4a2a-452e-85ca-9ea78de80a17 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.208 189020 DEBUG nova.compute.manager [req-e2d7f7b9-4210-45be-aa64-df28738038ec req-27bc70b4-4a2a-452e-85ca-9ea78de80a17 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Processing event network-vif-plugged-578e1a09-d9b1-45b7-905b-69ab1a58cbe0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.458 189020 DEBUG nova.network.neutron [req-cf5dedec-8cad-4734-85a9-0bd0d34b3569 req-1c3a65c7-e78b-4176-8d9d-9bf893111742 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Updated VIF entry in instance network info cache for port 578e1a09-d9b1-45b7-905b-69ab1a58cbe0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.458 189020 DEBUG nova.network.neutron [req-cf5dedec-8cad-4734-85a9-0bd0d34b3569 req-1c3a65c7-e78b-4176-8d9d-9bf893111742 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Updating instance_info_cache with network_info: [{"id": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "address": "fa:16:3e:65:30:c3", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578e1a09-d9", "ovs_interfaceid": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.476 189020 DEBUG oslo_concurrency.lockutils [req-cf5dedec-8cad-4734-85a9-0bd0d34b3569 req-1c3a65c7-e78b-4176-8d9d-9bf893111742 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.606 189020 DEBUG nova.compute.manager [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.608 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.610 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771426431.6050363, 9a9ee96c-8146-46a1-a098-5d021fb5e779 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.610 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] VM Started (Lifecycle Event)#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.613 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.619 189020 INFO nova.virt.libvirt.driver [-] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Instance spawned successfully.#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.619 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.635 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.645 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.650 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.651 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.651 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.652 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.653 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.653 189020 DEBUG nova.virt.libvirt.driver [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.678 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.678 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771426431.6071646, 9a9ee96c-8146-46a1-a098-5d021fb5e779 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.679 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] VM Paused (Lifecycle Event)#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.703 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.708 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771426431.6107426, 9a9ee96c-8146-46a1-a098-5d021fb5e779 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.708 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] VM Resumed (Lifecycle Event)#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.716 189020 INFO nova.compute.manager [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Took 6.07 seconds to spawn the instance on the hypervisor.#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.717 189020 DEBUG nova.compute.manager [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.733 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.739 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.779 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.791 189020 INFO nova.compute.manager [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Took 6.54 seconds to build instance.#033[00m
Feb 18 14:53:51 compute-0 nova_compute[189016]: 2026-02-18 14:53:51.808 189020 DEBUG oslo_concurrency.lockutils [None req-26893278-a688-48ee-b2d0-0032a39d5769 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:53:53 compute-0 nova_compute[189016]: 2026-02-18 14:53:53.280 189020 DEBUG nova.compute.manager [req-7c0180c0-72e9-4a53-8e7a-39195e39aa7a req-4826373b-c561-4c1b-a958-52b341c0ae77 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Received event network-vif-plugged-578e1a09-d9b1-45b7-905b-69ab1a58cbe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 14:53:53 compute-0 nova_compute[189016]: 2026-02-18 14:53:53.281 189020 DEBUG oslo_concurrency.lockutils [req-7c0180c0-72e9-4a53-8e7a-39195e39aa7a req-4826373b-c561-4c1b-a958-52b341c0ae77 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:53:53 compute-0 nova_compute[189016]: 2026-02-18 14:53:53.281 189020 DEBUG oslo_concurrency.lockutils [req-7c0180c0-72e9-4a53-8e7a-39195e39aa7a req-4826373b-c561-4c1b-a958-52b341c0ae77 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:53:53 compute-0 nova_compute[189016]: 2026-02-18 14:53:53.281 189020 DEBUG oslo_concurrency.lockutils [req-7c0180c0-72e9-4a53-8e7a-39195e39aa7a req-4826373b-c561-4c1b-a958-52b341c0ae77 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:53:53 compute-0 nova_compute[189016]: 2026-02-18 14:53:53.282 189020 DEBUG nova.compute.manager [req-7c0180c0-72e9-4a53-8e7a-39195e39aa7a req-4826373b-c561-4c1b-a958-52b341c0ae77 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] No waiting events found dispatching network-vif-plugged-578e1a09-d9b1-45b7-905b-69ab1a58cbe0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 14:53:53 compute-0 nova_compute[189016]: 2026-02-18 14:53:53.282 189020 WARNING nova.compute.manager [req-7c0180c0-72e9-4a53-8e7a-39195e39aa7a req-4826373b-c561-4c1b-a958-52b341c0ae77 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Received unexpected event network-vif-plugged-578e1a09-d9b1-45b7-905b-69ab1a58cbe0 for instance with vm_state active and task_state None.#033[00m
Feb 18 14:53:54 compute-0 podman[242875]: 2026-02-18 14:53:54.744278832 +0000 UTC m=+0.067911620 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 18 14:53:54 compute-0 nova_compute[189016]: 2026-02-18 14:53:54.830 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:56 compute-0 nova_compute[189016]: 2026-02-18 14:53:56.614 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:53:56 compute-0 podman[242896]: 2026-02-18 14:53:56.761719472 +0000 UTC m=+0.069864267 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Feb 18 14:53:56 compute-0 podman[242897]: 2026-02-18 14:53:56.771038601 +0000 UTC m=+0.079572205 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., release-0.7.12=, vendor=Red Hat, Inc., config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public)
Feb 18 14:53:59 compute-0 podman[204930]: time="2026-02-18T14:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:53:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 14:53:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
Feb 18 14:53:59 compute-0 nova_compute[189016]: 2026-02-18 14:53:59.832 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:01 compute-0 openstack_network_exporter[208107]: ERROR   14:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:54:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:54:01 compute-0 nova_compute[189016]: 2026-02-18 14:54:01.636 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:01 compute-0 openstack_network_exporter[208107]: ERROR   14:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:54:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:54:04 compute-0 nova_compute[189016]: 2026-02-18 14:54:04.836 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:05 compute-0 podman[242933]: 2026-02-18 14:54:05.748372673 +0000 UTC m=+0.070488623 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Feb 18 14:54:05 compute-0 podman[242934]: 2026-02-18 14:54:05.776895104 +0000 UTC m=+0.087236205 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 14:54:06 compute-0 nova_compute[189016]: 2026-02-18 14:54:06.632 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:07 compute-0 podman[242973]: 2026-02-18 14:54:07.823958442 +0000 UTC m=+0.150162251 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 18 14:54:09 compute-0 nova_compute[189016]: 2026-02-18 14:54:09.839 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:11 compute-0 nova_compute[189016]: 2026-02-18 14:54:11.635 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:14 compute-0 nova_compute[189016]: 2026-02-18 14:54:14.845 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:16 compute-0 nova_compute[189016]: 2026-02-18 14:54:16.638 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:19 compute-0 podman[243001]: 2026-02-18 14:54:19.771882462 +0000 UTC m=+0.090673583 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, architecture=x86_64, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., io.openshift.expose-services=, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., release=1770267347)
Feb 18 14:54:19 compute-0 podman[243000]: 2026-02-18 14:54:19.776782147 +0000 UTC m=+0.098462231 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 14:54:19 compute-0 nova_compute[189016]: 2026-02-18 14:54:19.849 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:20 compute-0 ovn_controller[99062]: 2026-02-18T14:54:20Z|00039|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Feb 18 14:54:21 compute-0 nova_compute[189016]: 2026-02-18 14:54:21.641 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:23 compute-0 ovn_controller[99062]: 2026-02-18T14:54:23Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:30:c3 192.168.0.167
Feb 18 14:54:23 compute-0 ovn_controller[99062]: 2026-02-18T14:54:23Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:30:c3 192.168.0.167
Feb 18 14:54:24 compute-0 nova_compute[189016]: 2026-02-18 14:54:24.852 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:25 compute-0 podman[243057]: 2026-02-18 14:54:25.743514062 +0000 UTC m=+0.065175659 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 18 14:54:26 compute-0 nova_compute[189016]: 2026-02-18 14:54:26.643 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:27 compute-0 podman[243077]: 2026-02-18 14:54:27.771629425 +0000 UTC m=+0.087499131 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.component=ubi9-container, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9)
Feb 18 14:54:27 compute-0 podman[243076]: 2026-02-18 14:54:27.790626082 +0000 UTC m=+0.108236062 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 18 14:54:29 compute-0 podman[204930]: time="2026-02-18T14:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:54:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 14:54:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Feb 18 14:54:29 compute-0 nova_compute[189016]: 2026-02-18 14:54:29.854 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:31 compute-0 openstack_network_exporter[208107]: ERROR   14:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:54:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:54:31 compute-0 openstack_network_exporter[208107]: ERROR   14:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:54:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:54:31 compute-0 nova_compute[189016]: 2026-02-18 14:54:31.646 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:32 compute-0 nova_compute[189016]: 2026-02-18 14:54:32.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:54:32 compute-0 nova_compute[189016]: 2026-02-18 14:54:32.053 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 18 14:54:34 compute-0 nova_compute[189016]: 2026-02-18 14:54:34.858 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:36 compute-0 nova_compute[189016]: 2026-02-18 14:54:36.073 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:54:36 compute-0 nova_compute[189016]: 2026-02-18 14:54:36.074 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 14:54:36 compute-0 nova_compute[189016]: 2026-02-18 14:54:36.074 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 14:54:36 compute-0 nova_compute[189016]: 2026-02-18 14:54:36.511 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 14:54:36 compute-0 nova_compute[189016]: 2026-02-18 14:54:36.512 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 14:54:36 compute-0 nova_compute[189016]: 2026-02-18 14:54:36.513 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 14:54:36 compute-0 nova_compute[189016]: 2026-02-18 14:54:36.513 189020 DEBUG nova.objects.instance [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lazy-loading 'info_cache' on Instance uuid debb3011-9258-4f04-9eb4-592cc56eb3eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 14:54:36 compute-0 nova_compute[189016]: 2026-02-18 14:54:36.648 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:36 compute-0 podman[243117]: 2026-02-18 14:54:36.760472839 +0000 UTC m=+0.079660781 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 14:54:36 compute-0 podman[243116]: 2026-02-18 14:54:36.772123357 +0000 UTC m=+0.100097584 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 18 14:54:38 compute-0 nova_compute[189016]: 2026-02-18 14:54:38.652 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updating instance_info_cache with network_info: [{"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 14:54:38 compute-0 nova_compute[189016]: 2026-02-18 14:54:38.683 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 14:54:38 compute-0 nova_compute[189016]: 2026-02-18 14:54:38.684 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 14:54:38 compute-0 nova_compute[189016]: 2026-02-18 14:54:38.685 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:54:38 compute-0 nova_compute[189016]: 2026-02-18 14:54:38.685 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 18 14:54:38 compute-0 nova_compute[189016]: 2026-02-18 14:54:38.722 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 18 14:54:38 compute-0 nova_compute[189016]: 2026-02-18 14:54:38.722 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:54:38 compute-0 podman[243160]: 2026-02-18 14:54:38.813762506 +0000 UTC m=+0.133524479 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
config_id=ovn_controller, container_name=ovn_controller)
Feb 18 14:54:39 compute-0 nova_compute[189016]: 2026-02-18 14:54:39.100 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:54:39 compute-0 nova_compute[189016]: 2026-02-18 14:54:39.101 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:54:39 compute-0 nova_compute[189016]: 2026-02-18 14:54:39.121 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:54:39 compute-0 nova_compute[189016]: 2026-02-18 14:54:39.122 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 14:54:39 compute-0 nova_compute[189016]: 2026-02-18 14:54:39.861 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:40 compute-0 nova_compute[189016]: 2026-02-18 14:54:40.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:54:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:54:41.423 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:54:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:54:41.425 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:54:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:54:41.426 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:54:41 compute-0 nova_compute[189016]: 2026-02-18 14:54:41.651 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:42 compute-0 nova_compute[189016]: 2026-02-18 14:54:42.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:54:42 compute-0 nova_compute[189016]: 2026-02-18 14:54:42.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:54:43 compute-0 nova_compute[189016]: 2026-02-18 14:54:43.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:54:43 compute-0 nova_compute[189016]: 2026-02-18 14:54:43.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.141 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.143 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.143 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.144 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.327 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.402 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.404 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.465 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.467 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.537 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.538 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.600 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.610 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.704 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.705 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.759 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.761 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.822 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.823 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.864 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:44 compute-0 nova_compute[189016]: 2026-02-18 14:54:44.885 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:54:45 compute-0 nova_compute[189016]: 2026-02-18 14:54:45.237 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 14:54:45 compute-0 nova_compute[189016]: 2026-02-18 14:54:45.239 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5019MB free_disk=72.2264518737793GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 14:54:45 compute-0 nova_compute[189016]: 2026-02-18 14:54:45.239 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:54:45 compute-0 nova_compute[189016]: 2026-02-18 14:54:45.240 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:54:45 compute-0 nova_compute[189016]: 2026-02-18 14:54:45.448 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 14:54:45 compute-0 nova_compute[189016]: 2026-02-18 14:54:45.448 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 9a9ee96c-8146-46a1-a098-5d021fb5e779 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 14:54:45 compute-0 nova_compute[189016]: 2026-02-18 14:54:45.449 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 14:54:45 compute-0 nova_compute[189016]: 2026-02-18 14:54:45.449 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 14:54:45 compute-0 nova_compute[189016]: 2026-02-18 14:54:45.591 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 14:54:45 compute-0 nova_compute[189016]: 2026-02-18 14:54:45.607 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 14:54:45 compute-0 nova_compute[189016]: 2026-02-18 14:54:45.632 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 14:54:45 compute-0 nova_compute[189016]: 2026-02-18 14:54:45.633 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:54:46 compute-0 nova_compute[189016]: 2026-02-18 14:54:46.653 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:49 compute-0 nova_compute[189016]: 2026-02-18 14:54:49.867 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:50 compute-0 nova_compute[189016]: 2026-02-18 14:54:50.171 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:54:50 compute-0 nova_compute[189016]: 2026-02-18 14:54:50.194 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Triggering sync for uuid debb3011-9258-4f04-9eb4-592cc56eb3eb _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 18 14:54:50 compute-0 nova_compute[189016]: 2026-02-18 14:54:50.194 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Triggering sync for uuid 9a9ee96c-8146-46a1-a098-5d021fb5e779 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 18 14:54:50 compute-0 nova_compute[189016]: 2026-02-18 14:54:50.194 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "debb3011-9258-4f04-9eb4-592cc56eb3eb" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:54:50 compute-0 nova_compute[189016]: 2026-02-18 14:54:50.195 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:54:50 compute-0 nova_compute[189016]: 2026-02-18 14:54:50.195 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "9a9ee96c-8146-46a1-a098-5d021fb5e779" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:54:50 compute-0 nova_compute[189016]: 2026-02-18 14:54:50.195 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:54:50 compute-0 nova_compute[189016]: 2026-02-18 14:54:50.268 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:54:50 compute-0 nova_compute[189016]: 2026-02-18 14:54:50.272 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:54:50 compute-0 podman[243213]: 2026-02-18 14:54:50.7526414 +0000 UTC m=+0.074336044 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter)
Feb 18 14:54:50 compute-0 podman[243212]: 2026-02-18 14:54:50.788777355 +0000 UTC m=+0.104899617 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 14:54:51 compute-0 nova_compute[189016]: 2026-02-18 14:54:51.656 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:54 compute-0 nova_compute[189016]: 2026-02-18 14:54:54.870 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:56 compute-0 nova_compute[189016]: 2026-02-18 14:54:56.659 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:54:56 compute-0 podman[243255]: 2026-02-18 14:54:56.751425761 +0000 UTC m=+0.071720777 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 14:54:58 compute-0 podman[243272]: 2026-02-18 14:54:58.735783702 +0000 UTC m=+0.061200668 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true)
Feb 18 14:54:58 compute-0 podman[243273]: 2026-02-18 14:54:58.762299681 +0000 UTC m=+0.086373632 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, config_id=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9)
Feb 18 14:54:59 compute-0 podman[204930]: time="2026-02-18T14:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:54:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 14:54:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4373 "" "Go-http-client/1.1"
Feb 18 14:54:59 compute-0 nova_compute[189016]: 2026-02-18 14:54:59.872 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:01 compute-0 openstack_network_exporter[208107]: ERROR   14:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:55:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:55:01 compute-0 openstack_network_exporter[208107]: ERROR   14:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:55:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:55:01 compute-0 nova_compute[189016]: 2026-02-18 14:55:01.662 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:04 compute-0 nova_compute[189016]: 2026-02-18 14:55:04.875 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:06 compute-0 nova_compute[189016]: 2026-02-18 14:55:06.665 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:07 compute-0 podman[243312]: 2026-02-18 14:55:07.250547748 +0000 UTC m=+0.075332570 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 18 14:55:07 compute-0 podman[243313]: 2026-02-18 14:55:07.283571343 +0000 UTC m=+0.096534732 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 14:55:09 compute-0 podman[243355]: 2026-02-18 14:55:09.81266988 +0000 UTC m=+0.131368374 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 18 14:55:09 compute-0 nova_compute[189016]: 2026-02-18 14:55:09.878 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:11 compute-0 nova_compute[189016]: 2026-02-18 14:55:11.669 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:14 compute-0 nova_compute[189016]: 2026-02-18 14:55:14.880 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:16 compute-0 nova_compute[189016]: 2026-02-18 14:55:16.673 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:19 compute-0 nova_compute[189016]: 2026-02-18 14:55:19.884 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:21 compute-0 nova_compute[189016]: 2026-02-18 14:55:21.675 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:21 compute-0 podman[243382]: 2026-02-18 14:55:21.747927638 +0000 UTC m=+0.060149621 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 14:55:21 compute-0 podman[243383]: 2026-02-18 14:55:21.766542984 +0000 UTC m=+0.077749111 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.7)
Feb 18 14:55:24 compute-0 nova_compute[189016]: 2026-02-18 14:55:24.887 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.190 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.191 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.191 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.191 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f78deee07d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.194 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.194 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.194 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.194 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.200 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 9a9ee96c-8146-46a1-a098-5d021fb5e779 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.203 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/9a9ee96c-8146-46a1-a098-5d021fb5e779 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}fc9f399571f1151dad02e1ad7f2b10f5a5ac66aa5da5d4c981c78739a1fdba51" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.742 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Wed, 18 Feb 2026 14:55:25 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-e136f894-6276-490a-b608-808e9a6950fb x-openstack-request-id: req-e136f894-6276-490a-b608-808e9a6950fb _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.743 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "9a9ee96c-8146-46a1-a098-5d021fb5e779", "name": "vn-mz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-vnf-e5gqxak74kuc", "status": "ACTIVE", "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "user_id": "387d978e2b494e88ad13abae2a83321d", "metadata": {"metering.server_group": "449d1667-0173-4809-b0e3-b50e27381afa"}, "hostId": "446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd", "image": {"id": "7cc2a96a-1e6c-474d-b671-0e2626bf4158", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/7cc2a96a-1e6c-474d-b671-0e2626bf4158"}]}, "flavor": {"id": "23e98520-0527-4596-8420-5ff1feeb3155", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/23e98520-0527-4596-8420-5ff1feeb3155"}]}, "created": "2026-02-18T14:53:43Z", "updated": "2026-02-18T14:53:51Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.167", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:65:30:c3"}, {"version": 4, "addr": "192.168.122.197", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:65:30:c3"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/9a9ee96c-8146-46a1-a098-5d021fb5e779"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/9a9ee96c-8146-46a1-a098-5d021fb5e779"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-18T14:53:51.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.743 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/9a9ee96c-8146-46a1-a098-5d021fb5e779 used request id req-e136f894-6276-490a-b608-808e9a6950fb request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.744 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9a9ee96c-8146-46a1-a098-5d021fb5e779', 'name': 'vn-mz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-vnf-e5gqxak74kuc', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {'metering.server_group': '449d1667-0173-4809-b0e3-b50e27381afa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.748 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'debb3011-9258-4f04-9eb4-592cc56eb3eb', 'name': 'test_0', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.748 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.749 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.749 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.749 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-18T14:55:25.749475) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.804 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.latency volume: 817918800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.805 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.latency volume: 168311221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.805 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.latency volume: 348237395 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.872 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 698777971 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.873 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 118502891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.873 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 69122624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.875 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.875 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f78deee0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.875 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.875 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.876 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.876 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.876 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.876 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-18T14:55:25.876174) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.877 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.878 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.878 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.879 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.879 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.880 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.880 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f78deee0830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.880 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.880 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.880 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.881 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.881 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.881 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-18T14:55:25.881149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.882 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.882 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.883 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.883 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.884 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.885 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.885 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f78deee2ae0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.885 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.885 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.886 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.886 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.886 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-18T14:55:25.886197) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.912 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.913 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.913 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.939 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.940 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.940 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.941 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.941 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f78deee0890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.942 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.942 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.942 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.942 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.943 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.944 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.944 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.945 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.945 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-18T14:55:25.942719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.945 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.945 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.946 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.947 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f78deee08f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.947 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.947 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.947 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.947 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.948 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.bytes volume: 41738240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.948 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-18T14:55:25.947614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.948 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.948 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.949 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.949 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.950 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.950 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.951 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f78deee2390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.951 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.951 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.951 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.951 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.952 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-18T14:55:25.951709) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.957 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 9a9ee96c-8146-46a1-a098-5d021fb5e779 / tap578e1a09-d9 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.957 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.962 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.963 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.963 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f78e1127770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.963 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.963 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.963 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.964 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.965 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-18T14:55:25.963916) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:25.986 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/cpu volume: 47370000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.002 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/cpu volume: 33340000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.003 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.003 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f78deee0950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.003 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.003 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.003 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.003 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.003 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.latency volume: 1997391474 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.004 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-18T14:55:26.003755) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.004 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.latency volume: 22293021 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.004 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.004 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 2529045248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.005 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 11626356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.005 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.005 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.006 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f78deee21b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.006 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.006 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.006 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.006 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.006 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.packets volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.007 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.007 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.007 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f78deee0d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.008 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.008 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.008 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-18T14:55:26.006530) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.008 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.008 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-18T14:55:26.008419) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.009 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.009 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.010 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.010 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f78deee09b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.010 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.010 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.010 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.010 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.011 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.requests volume: 227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-18T14:55:26.010897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.011 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.011 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.012 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.012 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.012 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.013 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.013 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f78deee1010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.013 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.013 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.013 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.014 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-18T14:55:26.013670) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.014 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.015 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.016 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.016 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f78deee0a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.016 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.016 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.017 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.017 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-18T14:55:26.017313) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.018 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.018 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f78deee1280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.018 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.019 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.019 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.019 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.019 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.019 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-18T14:55:26.019266) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.020 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.020 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f78deee2240>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.020 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.020 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.020 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.020 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.020 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.bytes volume: 4822 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.021 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.021 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-18T14:55:26.020821) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.021 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.021 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f78deee0a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.022 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.022 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.022 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.022 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.022 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.023 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f78deee22d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.023 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.023 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.023 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.023 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-18T14:55:26.022312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.023 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.024 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-18T14:55:26.023629) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.024 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes.delta volume: 550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.024 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.024 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f78deee2330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.024 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.024 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.025 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.025 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.025 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-18T14:55:26.025109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.025 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-mz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-vnf-e5gqxak74kuc>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-mz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-vnf-e5gqxak74kuc>]
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.025 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f78deee23c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.026 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.026 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.026 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.026 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.026 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.026 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.027 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.027 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f78deee0cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.027 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.027 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-18T14:55:26.026409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.027 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.028 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.028 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.028 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/memory.usage volume: 49.1015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.028 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/memory.usage volume: 48.91796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.029 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-18T14:55:26.028111) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.029 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f78deee2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.029 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.029 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.029 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.029 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.029 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.030 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-18T14:55:26.029737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.030 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.030 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.031 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.031 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.032 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.032 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f78deee0d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.032 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.032 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.032 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.032 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.032 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.033 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.033 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.033 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f78deee1520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.033 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.034 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.034 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-18T14:55:26.032570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.034 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.034 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.034 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.035 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.035 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f78deee20f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.035 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.035 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.035 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.036 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-18T14:55:26.034304) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.036 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.036 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.037 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.037 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f78deee2420>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.037 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.037 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-18T14:55:26.035932) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.037 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.037 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.037 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.037 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-18T14:55:26.037744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.038 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-mz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-vnf-e5gqxak74kuc>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-mz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-vnf-e5gqxak74kuc>]
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.038 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.039 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.039 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.039 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.039 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.039 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.039 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.039 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.039 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.039 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.039 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.039 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.039 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.039 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.039 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.039 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.040 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.040 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.040 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.040 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.040 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.040 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.040 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.040 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.040 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:55:26.040 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:55:26 compute-0 nova_compute[189016]: 2026-02-18 14:55:26.678 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:27 compute-0 podman[243424]: 2026-02-18 14:55:27.776447395 +0000 UTC m=+0.092298543 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb 18 14:55:29 compute-0 podman[204930]: time="2026-02-18T14:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:55:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 14:55:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
Feb 18 14:55:29 compute-0 podman[243444]: 2026-02-18 14:55:29.776486119 +0000 UTC m=+0.085585022 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, container_name=kepler, version=9.4, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-type=git)
Feb 18 14:55:29 compute-0 podman[243443]: 2026-02-18 14:55:29.782598205 +0000 UTC m=+0.101762585 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Feb 18 14:55:29 compute-0 nova_compute[189016]: 2026-02-18 14:55:29.889 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:31 compute-0 openstack_network_exporter[208107]: ERROR   14:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:55:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:55:31 compute-0 openstack_network_exporter[208107]: ERROR   14:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:55:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:55:31 compute-0 nova_compute[189016]: 2026-02-18 14:55:31.681 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:34 compute-0 nova_compute[189016]: 2026-02-18 14:55:34.892 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:36 compute-0 nova_compute[189016]: 2026-02-18 14:55:36.687 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:37 compute-0 podman[243483]: 2026-02-18 14:55:37.742595071 +0000 UTC m=+0.068864354 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 14:55:37 compute-0 podman[243482]: 2026-02-18 14:55:37.771108561 +0000 UTC m=+0.101495739 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0)
Feb 18 14:55:38 compute-0 nova_compute[189016]: 2026-02-18 14:55:38.069 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:55:38 compute-0 nova_compute[189016]: 2026-02-18 14:55:38.072 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:55:38 compute-0 nova_compute[189016]: 2026-02-18 14:55:38.072 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 14:55:38 compute-0 nova_compute[189016]: 2026-02-18 14:55:38.600 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 14:55:38 compute-0 nova_compute[189016]: 2026-02-18 14:55:38.600 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 14:55:38 compute-0 nova_compute[189016]: 2026-02-18 14:55:38.601 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 14:55:39 compute-0 nova_compute[189016]: 2026-02-18 14:55:39.897 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Updating instance_info_cache with network_info: [{"id": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "address": "fa:16:3e:65:30:c3", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578e1a09-d9", "ovs_interfaceid": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 14:55:39 compute-0 nova_compute[189016]: 2026-02-18 14:55:39.900 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:39 compute-0 nova_compute[189016]: 2026-02-18 14:55:39.940 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 14:55:39 compute-0 nova_compute[189016]: 2026-02-18 14:55:39.941 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 14:55:40 compute-0 nova_compute[189016]: 2026-02-18 14:55:40.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:55:40 compute-0 podman[243527]: 2026-02-18 14:55:40.763052149 +0000 UTC m=+0.096157413 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 18 14:55:41 compute-0 nova_compute[189016]: 2026-02-18 14:55:41.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:55:41 compute-0 nova_compute[189016]: 2026-02-18 14:55:41.053 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 14:55:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:55:41.424 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:55:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:55:41.426 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:55:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:55:41.427 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:55:41 compute-0 nova_compute[189016]: 2026-02-18 14:55:41.691 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:43 compute-0 nova_compute[189016]: 2026-02-18 14:55:43.054 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:55:43 compute-0 nova_compute[189016]: 2026-02-18 14:55:43.056 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.089 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.091 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.092 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.093 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.188 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.272 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.274 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.340 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.342 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.403 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.405 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.456 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.467 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.528 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.530 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.587 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.588 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.650 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.653 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.735 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:55:44 compute-0 nova_compute[189016]: 2026-02-18 14:55:44.902 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:45 compute-0 nova_compute[189016]: 2026-02-18 14:55:45.022 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 14:55:45 compute-0 nova_compute[189016]: 2026-02-18 14:55:45.024 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5031MB free_disk=72.2262191772461GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 14:55:45 compute-0 nova_compute[189016]: 2026-02-18 14:55:45.024 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:55:45 compute-0 nova_compute[189016]: 2026-02-18 14:55:45.025 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:55:45 compute-0 nova_compute[189016]: 2026-02-18 14:55:45.175 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 14:55:45 compute-0 nova_compute[189016]: 2026-02-18 14:55:45.177 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 9a9ee96c-8146-46a1-a098-5d021fb5e779 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 14:55:45 compute-0 nova_compute[189016]: 2026-02-18 14:55:45.177 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 14:55:45 compute-0 nova_compute[189016]: 2026-02-18 14:55:45.178 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 14:55:45 compute-0 nova_compute[189016]: 2026-02-18 14:55:45.233 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 14:55:45 compute-0 nova_compute[189016]: 2026-02-18 14:55:45.262 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 14:55:45 compute-0 nova_compute[189016]: 2026-02-18 14:55:45.264 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 14:55:45 compute-0 nova_compute[189016]: 2026-02-18 14:55:45.265 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:55:45 compute-0 kernel: hrtimer: interrupt took 989995 ns
Feb 18 14:55:46 compute-0 nova_compute[189016]: 2026-02-18 14:55:46.693 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:49 compute-0 nova_compute[189016]: 2026-02-18 14:55:49.905 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:51 compute-0 nova_compute[189016]: 2026-02-18 14:55:51.696 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:52 compute-0 podman[243580]: 2026-02-18 14:55:52.734099153 +0000 UTC m=+0.061034124 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 14:55:52 compute-0 podman[243581]: 2026-02-18 14:55:52.739218344 +0000 UTC m=+0.065618251 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, io.buildah.version=1.33.7)
Feb 18 14:55:53 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 18 14:55:54 compute-0 nova_compute[189016]: 2026-02-18 14:55:54.914 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:56 compute-0 nova_compute[189016]: 2026-02-18 14:55:56.699 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:55:58 compute-0 podman[243625]: 2026-02-18 14:55:58.769111726 +0000 UTC m=+0.089518912 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 18 14:55:59 compute-0 podman[204930]: time="2026-02-18T14:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:55:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 14:55:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4377 "" "Go-http-client/1.1"
Feb 18 14:55:59 compute-0 nova_compute[189016]: 2026-02-18 14:55:59.917 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:00 compute-0 podman[243645]: 2026-02-18 14:56:00.775498893 +0000 UTC m=+0.104619119 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9, managed_by=edpm_ansible, io.openshift.tags=base rhel9, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, maintainer=Red Hat, Inc., release-0.7.12=, version=9.4, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2024-09-18T21:23:30)
Feb 18 14:56:00 compute-0 podman[243644]: 2026-02-18 14:56:00.779956647 +0000 UTC m=+0.110036988 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 18 14:56:01 compute-0 openstack_network_exporter[208107]: ERROR   14:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:56:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:56:01 compute-0 openstack_network_exporter[208107]: ERROR   14:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:56:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:56:01 compute-0 nova_compute[189016]: 2026-02-18 14:56:01.700 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:04 compute-0 nova_compute[189016]: 2026-02-18 14:56:04.920 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:06 compute-0 nova_compute[189016]: 2026-02-18 14:56:06.703 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:08 compute-0 podman[243683]: 2026-02-18 14:56:08.734460563 +0000 UTC m=+0.062564303 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute)
Feb 18 14:56:08 compute-0 podman[243684]: 2026-02-18 14:56:08.73512087 +0000 UTC m=+0.058794596 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 14:56:09 compute-0 nova_compute[189016]: 2026-02-18 14:56:09.922 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:11 compute-0 nova_compute[189016]: 2026-02-18 14:56:11.705 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:11 compute-0 podman[243724]: 2026-02-18 14:56:11.771912316 +0000 UTC m=+0.104363623 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Feb 18 14:56:14 compute-0 nova_compute[189016]: 2026-02-18 14:56:14.924 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:16 compute-0 nova_compute[189016]: 2026-02-18 14:56:16.708 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:19 compute-0 nova_compute[189016]: 2026-02-18 14:56:19.928 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:21 compute-0 nova_compute[189016]: 2026-02-18 14:56:21.710 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:23 compute-0 podman[243752]: 2026-02-18 14:56:23.755027376 +0000 UTC m=+0.074552000 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, architecture=x86_64, name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal 
Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Feb 18 14:56:23 compute-0 podman[243751]: 2026-02-18 14:56:23.767115813 +0000 UTC m=+0.086573165 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 14:56:24 compute-0 nova_compute[189016]: 2026-02-18 14:56:24.933 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:26 compute-0 nova_compute[189016]: 2026-02-18 14:56:26.713 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:29 compute-0 podman[243793]: 2026-02-18 14:56:29.740568368 +0000 UTC m=+0.068506847 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 18 14:56:29 compute-0 podman[204930]: time="2026-02-18T14:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:56:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 14:56:29 compute-0 nova_compute[189016]: 2026-02-18 14:56:29.987 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:30 compute-0 podman[204930]: @ - - [18/Feb/2026:14:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4373 "" "Go-http-client/1.1"
Feb 18 14:56:31 compute-0 openstack_network_exporter[208107]: ERROR   14:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:56:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:56:31 compute-0 openstack_network_exporter[208107]: ERROR   14:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:56:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:56:31 compute-0 nova_compute[189016]: 2026-02-18 14:56:31.714 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:31 compute-0 podman[243809]: 2026-02-18 14:56:31.721781763 +0000 UTC m=+0.053324872 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 18 14:56:31 compute-0 podman[243810]: 2026-02-18 14:56:31.734420664 +0000 UTC m=+0.062522276 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, architecture=x86_64, version=9.4, build-date=2024-09-18T21:23:30, distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_id=kepler)
Feb 18 14:56:34 compute-0 nova_compute[189016]: 2026-02-18 14:56:34.994 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:36 compute-0 nova_compute[189016]: 2026-02-18 14:56:36.717 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:39 compute-0 nova_compute[189016]: 2026-02-18 14:56:39.266 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:56:39 compute-0 nova_compute[189016]: 2026-02-18 14:56:39.267 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:56:39 compute-0 nova_compute[189016]: 2026-02-18 14:56:39.268 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 14:56:39 compute-0 nova_compute[189016]: 2026-02-18 14:56:39.268 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 14:56:39 compute-0 nova_compute[189016]: 2026-02-18 14:56:39.644 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 14:56:39 compute-0 nova_compute[189016]: 2026-02-18 14:56:39.644 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 14:56:39 compute-0 nova_compute[189016]: 2026-02-18 14:56:39.645 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 14:56:39 compute-0 nova_compute[189016]: 2026-02-18 14:56:39.645 189020 DEBUG nova.objects.instance [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lazy-loading 'info_cache' on Instance uuid debb3011-9258-4f04-9eb4-592cc56eb3eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 14:56:39 compute-0 podman[243848]: 2026-02-18 14:56:39.769337327 +0000 UTC m=+0.087418906 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, org.label-schema.schema-version=1.0)
Feb 18 14:56:39 compute-0 podman[243849]: 2026-02-18 14:56:39.776909089 +0000 UTC m=+0.101117663 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 18 14:56:39 compute-0 nova_compute[189016]: 2026-02-18 14:56:39.997 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:41 compute-0 nova_compute[189016]: 2026-02-18 14:56:41.095 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updating instance_info_cache with network_info: [{"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 14:56:41 compute-0 nova_compute[189016]: 2026-02-18 14:56:41.111 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 14:56:41 compute-0 nova_compute[189016]: 2026-02-18 14:56:41.112 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 14:56:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:56:41.426 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:56:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:56:41.427 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:56:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:56:41.428 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:56:41 compute-0 nova_compute[189016]: 2026-02-18 14:56:41.720 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:42 compute-0 nova_compute[189016]: 2026-02-18 14:56:42.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:56:42 compute-0 nova_compute[189016]: 2026-02-18 14:56:42.104 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:56:42 compute-0 nova_compute[189016]: 2026-02-18 14:56:42.104 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:56:42 compute-0 nova_compute[189016]: 2026-02-18 14:56:42.105 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 14:56:42 compute-0 podman[243890]: 2026-02-18 14:56:42.768045048 +0000 UTC m=+0.091967282 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, 
org.label-schema.license=GPLv2, tcib_managed=true)
Feb 18 14:56:43 compute-0 nova_compute[189016]: 2026-02-18 14:56:43.053 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.086 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.086 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.087 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.087 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.175 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.230 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.231 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.278 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.280 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.345 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.346 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.397 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.404 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.450 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.451 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.507 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.509 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.559 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.561 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.642 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.967 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.968 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5033MB free_disk=72.22620010375977GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.969 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:56:44 compute-0 nova_compute[189016]: 2026-02-18 14:56:44.969 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:56:45 compute-0 nova_compute[189016]: 2026-02-18 14:56:45.000 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:45 compute-0 nova_compute[189016]: 2026-02-18 14:56:45.050 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 14:56:45 compute-0 nova_compute[189016]: 2026-02-18 14:56:45.050 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 9a9ee96c-8146-46a1-a098-5d021fb5e779 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 14:56:45 compute-0 nova_compute[189016]: 2026-02-18 14:56:45.051 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 14:56:45 compute-0 nova_compute[189016]: 2026-02-18 14:56:45.051 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 14:56:45 compute-0 nova_compute[189016]: 2026-02-18 14:56:45.118 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 14:56:45 compute-0 nova_compute[189016]: 2026-02-18 14:56:45.134 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 14:56:45 compute-0 nova_compute[189016]: 2026-02-18 14:56:45.136 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 14:56:45 compute-0 nova_compute[189016]: 2026-02-18 14:56:45.136 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:56:46 compute-0 nova_compute[189016]: 2026-02-18 14:56:46.722 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:50 compute-0 nova_compute[189016]: 2026-02-18 14:56:50.002 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:51 compute-0 nova_compute[189016]: 2026-02-18 14:56:51.725 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:54 compute-0 podman[243941]: 2026-02-18 14:56:54.734219124 +0000 UTC m=+0.055997710 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 14:56:54 compute-0 podman[243942]: 2026-02-18 14:56:54.737815015 +0000 UTC m=+0.060156815 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.7, architecture=x86_64, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 18 14:56:55 compute-0 nova_compute[189016]: 2026-02-18 14:56:55.003 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:56 compute-0 nova_compute[189016]: 2026-02-18 14:56:56.727 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:56:59 compute-0 podman[204930]: time="2026-02-18T14:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:56:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 14:56:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4372 "" "Go-http-client/1.1"
Feb 18 14:57:00 compute-0 nova_compute[189016]: 2026-02-18 14:57:00.008 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:57:00 compute-0 podman[243982]: 2026-02-18 14:57:00.762465147 +0000 UTC m=+0.095155903 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 14:57:01 compute-0 openstack_network_exporter[208107]: ERROR   14:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:57:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:57:01 compute-0 openstack_network_exporter[208107]: ERROR   14:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:57:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:57:01 compute-0 nova_compute[189016]: 2026-02-18 14:57:01.729 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:57:02 compute-0 podman[243998]: 2026-02-18 14:57:02.728423837 +0000 UTC m=+0.060027682 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 18 14:57:02 compute-0 podman[243999]: 2026-02-18 14:57:02.738498782 +0000 UTC m=+0.062854664 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, container_name=kepler, vendor=Red Hat, Inc., config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, name=ubi9)
Feb 18 14:57:05 compute-0 nova_compute[189016]: 2026-02-18 14:57:05.012 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:57:06 compute-0 nova_compute[189016]: 2026-02-18 14:57:06.733 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:57:10 compute-0 nova_compute[189016]: 2026-02-18 14:57:10.017 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:57:10 compute-0 podman[244036]: 2026-02-18 14:57:10.737748782 +0000 UTC m=+0.066057855 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 18 14:57:10 compute-0 podman[244037]: 2026-02-18 14:57:10.767544987 +0000 UTC m=+0.089844828 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 14:57:11 compute-0 nova_compute[189016]: 2026-02-18 14:57:11.735 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:57:13 compute-0 podman[244077]: 2026-02-18 14:57:13.787532076 +0000 UTC m=+0.113895437 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 18 14:57:15 compute-0 nova_compute[189016]: 2026-02-18 14:57:15.020 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:57:16 compute-0 nova_compute[189016]: 2026-02-18 14:57:16.738 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:57:20 compute-0 nova_compute[189016]: 2026-02-18 14:57:20.023 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:57:21 compute-0 nova_compute[189016]: 2026-02-18 14:57:21.741 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:57:25 compute-0 nova_compute[189016]: 2026-02-18 14:57:25.027 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.191 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.192 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.192 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.193 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f78deee07d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.199 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.199 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.200 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.200 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.201 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.207 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.208 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9a9ee96c-8146-46a1-a098-5d021fb5e779', 'name': 'vn-mz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-vnf-e5gqxak74kuc', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {'metering.server_group': '449d1667-0173-4809-b0e3-b50e27381afa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.211 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'debb3011-9258-4f04-9eb4-592cc56eb3eb', 'name': 'test_0', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.211 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.212 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.212 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.212 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-18T14:57:25.212523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.271 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.latency volume: 817918800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.271 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.latency volume: 168311221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.272 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.latency volume: 348237395 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.338 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 698777971 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.339 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 118502891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.339 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 69122624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.341 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.341 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f78deee0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.341 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.341 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.341 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.342 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.342 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.342 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.342 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.343 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-18T14:57:25.341882) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.343 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.344 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.345 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.345 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f78deee0830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.345 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.345 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.345 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.345 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.346 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.346 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-18T14:57:25.345713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.346 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.347 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.347 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.348 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.349 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.349 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f78deee2ae0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.349 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.349 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.349 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.349 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.350 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-18T14:57:25.349900) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.374 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.374 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.374 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.396 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.396 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.397 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.398 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.398 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f78deee0890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.398 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.399 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.399 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.399 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.400 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.400 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-18T14:57:25.399730) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.401 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.401 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.402 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.402 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.403 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.404 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.404 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f78deee08f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.405 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.405 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.405 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.405 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.406 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.bytes volume: 41816064 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.406 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-18T14:57:25.405740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.407 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.408 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.408 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.409 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.410 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.410 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f78deee2390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.410 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.410 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.410 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.411 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.411 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-18T14:57:25.411095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.415 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.419 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.420 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.420 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f78e1127770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.420 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.420 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.421 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.421 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.422 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-18T14:57:25.421247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.440 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/cpu volume: 166430000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.457 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/cpu volume: 34600000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.458 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.458 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f78deee0950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.458 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.458 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.458 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.458 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.458 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.latency volume: 2131671363 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.458 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.latency volume: 22293021 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.459 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.459 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 2529045248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.459 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 11626356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.459 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.460 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.460 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f78deee21b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.460 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.460 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.460 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.461 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.460 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-18T14:57:25.458505) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.461 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-18T14:57:25.460949) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.461 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.packets volume: 43 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.461 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.461 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.462 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f78deee0d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.462 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.462 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.462 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.462 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.462 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.462 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-18T14:57:25.462456) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.462 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.463 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.463 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f78deee09b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.463 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.463 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.463 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.463 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.463 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.463 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.464 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.464 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.464 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-18T14:57:25.463550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.464 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.464 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.465 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.465 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f78deee1010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.465 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.465 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.465 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.465 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.465 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.465 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.466 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.466 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f78deee0a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.466 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.466 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.466 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-18T14:57:25.465592) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.466 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.466 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.467 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.467 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f78deee1280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.467 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.467 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-18T14:57:25.466859) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.467 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.467 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.468 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-18T14:57:25.467835) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.468 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.468 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.468 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.468 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f78deee2240>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.469 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.469 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.469 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.469 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.469 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.bytes volume: 4892 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.469 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-18T14:57:25.469274) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.470 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.470 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f78deee0a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.470 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.470 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.470 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.470 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.470 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.471 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f78deee22d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.471 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.471 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.471 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.471 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.471 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.472 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.472 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.472 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f78deee2330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.472 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.472 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f78deee23c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.472 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.472 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.473 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.473 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-18T14:57:25.470508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.473 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-18T14:57:25.471694) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.473 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.473 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.473 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.473 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.474 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f78deee0cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.474 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-18T14:57:25.473214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.474 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.474 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.474 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.474 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.474 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/memory.usage volume: 49.09375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.474 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-18T14:57:25.474434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.474 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/memory.usage volume: 48.91796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.475 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.475 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f78deee2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.475 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.475 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.475 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.475 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.475 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.476 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.476 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.476 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.476 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.476 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.477 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.477 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f78deee0d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.477 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-18T14:57:25.475531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.477 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.477 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.477 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.477 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.478 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.478 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-18T14:57:25.477878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.478 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.478 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.478 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f78deee1520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.478 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.478 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.478 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.479 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.479 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.479 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-18T14:57:25.479041) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.479 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.479 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.479 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f78deee20f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.480 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.480 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.480 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.480 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.480 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-18T14:57:25.480250) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.480 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.480 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.481 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f78deee2420>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.481 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.481 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.481 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.481 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.481 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.481 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.482 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.482 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.482 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.482 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.482 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.482 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.482 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.482 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.482 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.482 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.482 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.482 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.482 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.482 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.483 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.483 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.483 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.483 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.483 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.483 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:57:25.483 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:57:25 compute-0 podman[244104]: 2026-02-18 14:57:25.747795352 +0000 UTC m=+0.070978719 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 14:57:25 compute-0 podman[244105]: 2026-02-18 14:57:25.798387084 +0000 UTC m=+0.109900976 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., 
build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, config_id=openstack_network_exporter, io.openshift.expose-services=)
Feb 18 14:57:26 compute-0 nova_compute[189016]: 2026-02-18 14:57:26.743 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 14:57:29 compute-0 podman[204930]: time="2026-02-18T14:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:57:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 14:57:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Feb 18 14:57:30 compute-0 nova_compute[189016]: 2026-02-18 14:57:30.031 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 14:57:31 compute-0 openstack_network_exporter[208107]: ERROR   14:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:57:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:57:31 compute-0 openstack_network_exporter[208107]: ERROR   14:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:57:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:57:31 compute-0 podman[244148]: 2026-02-18 14:57:31.721323418 +0000 UTC m=+0.051452295 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 14:57:31 compute-0 nova_compute[189016]: 2026-02-18 14:57:31.747 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 14:57:33 compute-0 podman[244167]: 2026-02-18 14:57:33.75441258 +0000 UTC m=+0.084821831 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 14:57:33 compute-0 podman[244168]: 2026-02-18 14:57:33.765373507 +0000 UTC m=+0.094109266 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, managed_by=edpm_ansible, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., config_id=kepler, version=9.4, 
io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, vcs-type=git, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Feb 18 14:57:35 compute-0 nova_compute[189016]: 2026-02-18 14:57:35.035 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 14:57:36 compute-0 nova_compute[189016]: 2026-02-18 14:57:36.748 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 14:57:40 compute-0 nova_compute[189016]: 2026-02-18 14:57:40.040 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 14:57:41 compute-0 nova_compute[189016]: 2026-02-18 14:57:41.130 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 18 14:57:41 compute-0 nova_compute[189016]: 2026-02-18 14:57:41.131 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 18 14:57:41 compute-0 nova_compute[189016]: 2026-02-18 14:57:41.132 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 18 14:57:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:57:41.427 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 14:57:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:57:41.429 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 14:57:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:57:41.430 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 14:57:41 compute-0 nova_compute[189016]: 2026-02-18 14:57:41.706 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 18 14:57:41 compute-0 nova_compute[189016]: 2026-02-18 14:57:41.706 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 18 14:57:41 compute-0 nova_compute[189016]: 2026-02-18 14:57:41.706 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 18 14:57:41 compute-0 nova_compute[189016]: 2026-02-18 14:57:41.750 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 14:57:41 compute-0 podman[244205]: 2026-02-18 14:57:41.772119978 +0000 UTC m=+0.091286655 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 18 14:57:41 compute-0 podman[244204]: 2026-02-18 14:57:41.772324483 +0000 UTC m=+0.100822126 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260216, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 18 14:57:42 compute-0 nova_compute[189016]: 2026-02-18 14:57:42.732 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Updating instance_info_cache with network_info: [{"id": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "address": "fa:16:3e:65:30:c3", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578e1a09-d9", "ovs_interfaceid": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 14:57:42 compute-0 nova_compute[189016]: 2026-02-18 14:57:42.755 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 14:57:42 compute-0 nova_compute[189016]: 2026-02-18 14:57:42.755 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 14:57:42 compute-0 nova_compute[189016]: 2026-02-18 14:57:42.756 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:57:42 compute-0 nova_compute[189016]: 2026-02-18 14:57:42.756 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:57:42 compute-0 nova_compute[189016]: 2026-02-18 14:57:42.757 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 14:57:43 compute-0 nova_compute[189016]: 2026-02-18 14:57:43.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:57:44 compute-0 podman[244244]: 2026-02-18 14:57:44.776844011 +0000 UTC m=+0.098764014 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.042 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.212 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.213 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.213 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.214 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.291 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.385 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.386 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.434 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.435 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.485 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.486 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.544 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.550 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.595 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.596 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.646 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.648 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.710 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.710 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:57:45 compute-0 nova_compute[189016]: 2026-02-18 14:57:45.763 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.060 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.062 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5019MB free_disk=72.2262191772461GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.063 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.063 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.146 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.147 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 9a9ee96c-8146-46a1-a098-5d021fb5e779 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.147 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.148 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.187 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing inventories for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.206 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating ProviderTree inventory for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.206 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating inventory in ProviderTree for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.219 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing aggregate associations for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.238 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing trait associations for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380, traits: HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.286 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.303 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.305 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.306 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:57:46 compute-0 nova_compute[189016]: 2026-02-18 14:57:46.752 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:57:47 compute-0 nova_compute[189016]: 2026-02-18 14:57:47.306 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:57:50 compute-0 nova_compute[189016]: 2026-02-18 14:57:50.046 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:57:51 compute-0 nova_compute[189016]: 2026-02-18 14:57:51.753 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:57:55 compute-0 nova_compute[189016]: 2026-02-18 14:57:55.050 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:57:56 compute-0 podman[244299]: 2026-02-18 14:57:56.740819671 +0000 UTC m=+0.066453435 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 18 14:57:56 compute-0 nova_compute[189016]: 2026-02-18 14:57:56.755 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:57:56 compute-0 podman[244300]: 2026-02-18 14:57:56.76404653 +0000 UTC m=+0.072868418 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, name=ubi9/ubi-minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.7, build-date=2026-02-05T04:57:10Z, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 18 14:57:59 compute-0 podman[204930]: time="2026-02-18T14:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:57:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 14:57:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Feb 18 14:58:00 compute-0 nova_compute[189016]: 2026-02-18 14:58:00.056 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:01 compute-0 openstack_network_exporter[208107]: ERROR   14:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:58:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:58:01 compute-0 openstack_network_exporter[208107]: ERROR   14:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:58:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:58:01 compute-0 nova_compute[189016]: 2026-02-18 14:58:01.758 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:02 compute-0 podman[244341]: 2026-02-18 14:58:02.751677314 +0000 UTC m=+0.073103824 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 18 14:58:04 compute-0 podman[244361]: 2026-02-18 14:58:04.750124506 +0000 UTC m=+0.070544969 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, container_name=kepler, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc.)
Feb 18 14:58:04 compute-0 podman[244360]: 2026-02-18 14:58:04.750127506 +0000 UTC m=+0.076298594 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi)
Feb 18 14:58:05 compute-0 nova_compute[189016]: 2026-02-18 14:58:05.060 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:06 compute-0 nova_compute[189016]: 2026-02-18 14:58:06.761 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:10 compute-0 nova_compute[189016]: 2026-02-18 14:58:10.063 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:11 compute-0 nova_compute[189016]: 2026-02-18 14:58:11.763 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:12 compute-0 podman[244397]: 2026-02-18 14:58:12.74558216 +0000 UTC m=+0.072465187 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 18 14:58:12 compute-0 podman[244398]: 2026-02-18 14:58:12.767857245 +0000 UTC m=+0.084639376 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 14:58:15 compute-0 nova_compute[189016]: 2026-02-18 14:58:15.065 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:15 compute-0 podman[244437]: 2026-02-18 14:58:15.752875078 +0000 UTC m=+0.080237544 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 18 14:58:16 compute-0 nova_compute[189016]: 2026-02-18 14:58:16.767 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:20 compute-0 nova_compute[189016]: 2026-02-18 14:58:20.070 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:21 compute-0 nova_compute[189016]: 2026-02-18 14:58:21.767 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:21 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:21.774 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:bc:a6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:81:9f:51:00:b1'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 14:58:21 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:21.780 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 18 14:58:25 compute-0 nova_compute[189016]: 2026-02-18 14:58:25.076 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:26 compute-0 nova_compute[189016]: 2026-02-18 14:58:26.553 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "c469573f-54e2-4c7f-9223-77500b7b9ea2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:58:26 compute-0 nova_compute[189016]: 2026-02-18 14:58:26.555 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:58:26 compute-0 nova_compute[189016]: 2026-02-18 14:58:26.578 189020 DEBUG nova.compute.manager [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 18 14:58:26 compute-0 nova_compute[189016]: 2026-02-18 14:58:26.672 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:58:26 compute-0 nova_compute[189016]: 2026-02-18 14:58:26.673 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:58:26 compute-0 nova_compute[189016]: 2026-02-18 14:58:26.686 189020 DEBUG nova.virt.hardware [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 18 14:58:26 compute-0 nova_compute[189016]: 2026-02-18 14:58:26.687 189020 INFO nova.compute.claims [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 18 14:58:26 compute-0 nova_compute[189016]: 2026-02-18 14:58:26.773 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:26 compute-0 nova_compute[189016]: 2026-02-18 14:58:26.890 189020 DEBUG nova.compute.provider_tree [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 14:58:26 compute-0 nova_compute[189016]: 2026-02-18 14:58:26.907 189020 DEBUG nova.scheduler.client.report [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 14:58:26 compute-0 nova_compute[189016]: 2026-02-18 14:58:26.932 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:58:26 compute-0 nova_compute[189016]: 2026-02-18 14:58:26.934 189020 DEBUG nova.compute.manager [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 18 14:58:26 compute-0 nova_compute[189016]: 2026-02-18 14:58:26.972 189020 DEBUG nova.compute.manager [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 18 14:58:26 compute-0 nova_compute[189016]: 2026-02-18 14:58:26.973 189020 DEBUG nova.network.neutron [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 18 14:58:26 compute-0 nova_compute[189016]: 2026-02-18 14:58:26.991 189020 INFO nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.028 189020 DEBUG nova.compute.manager [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.114 189020 DEBUG nova.compute.manager [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.116 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.117 189020 INFO nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Creating image(s)#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.118 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "/var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.118 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.119 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.142 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.216 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.217 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "8e446fd4a49ba04578b223406ce2c408026401e6" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.218 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "8e446fd4a49ba04578b223406ce2c408026401e6" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.231 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.278 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.280 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6,backing_fmt=raw /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.326 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6,backing_fmt=raw /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.328 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "8e446fd4a49ba04578b223406ce2c408026401e6" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.329 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.378 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
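Every `qemu-img info` probe above is wrapped in `oslo_concurrency.prlimit --as=1073741824 --cpu=30`, which applies resource limits in the child before exec so a malformed or hostile image cannot make the prober consume unbounded memory or CPU. A minimal stdlib sketch of that pattern (Linux/POSIX only; the 1 GiB and 30 s values are taken from the log lines, the `run_limited` helper name is ours, not oslo's):

```python
import resource
import subprocess
import sys

def run_limited(cmd, as_bytes=1073741824, cpu_secs=30):
    """Run cmd with RLIMIT_AS / RLIMIT_CPU applied in the child,
    mirroring what oslo_concurrency.prlimit does for qemu-img probes."""
    def set_limits():  # runs in the forked child, before exec
        resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_secs, cpu_secs))
    return subprocess.run(cmd, preexec_fn=set_limits,
                          capture_output=True, text=True)

# The child reports its own CPU rlimit, showing the limit took effect.
res = run_limited([sys.executable, "-c",
                   "import resource; print(resource.getrlimit(resource.RLIMIT_CPU)[0])"])
print(res.stdout.strip())  # 30
```

A CPU-bound child exceeding the limit would be killed with SIGXCPU, which oslo surfaces as a failed probe rather than a hung compute worker.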
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.379 189020 DEBUG nova.virt.disk.api [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Checking if we can resize image /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.380 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.428 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.429 189020 DEBUG nova.virt.disk.api [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Cannot resize image /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
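The "Cannot resize image ... to a smaller size." line is `nova.virt.disk.api.can_resize_image` comparing the requested size against the virtual size reported by `qemu-img info --output=json` and allowing only a strict grow; here both are 1073741824 bytes, so the resize is skipped as a no-op. A simplified sketch of that check (the JSON shape is the `virtual-size` field qemu-img emits; the function body is our paraphrase, not nova's exact source):

```python
import json

def can_resize_image(qemu_img_info_json: str, requested_size: int) -> bool:
    """Simplified nova.virt.disk.api.can_resize_image: only a strict
    grow is allowed; equal or smaller requested sizes are rejected."""
    virtual_size = json.loads(qemu_img_info_json)["virtual-size"]
    return requested_size > virtual_size

info = json.dumps({"format": "qcow2", "virtual-size": 1073741824})
print(can_resize_image(info, 1073741824))      # False: same size, nothing to do
print(can_resize_image(info, 2 * 1073741824))  # True: strict grow is allowed
```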
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.430 189020 DEBUG nova.objects.instance [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lazy-loading 'migration_context' on Instance uuid c469573f-54e2-4c7f-9223-77500b7b9ea2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.449 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "/var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.450 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.451 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
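The Acquiring/acquired/released triplet around `.../disk.info` is lockutils taking an external file-based lock, so concurrent workers on the same host serialize their writes to that shared file. A stand-in using a plain `flock(2)` to show the mechanism (oslo actually delegates to the fasteners library; the `FileLock` class and path below are ours):

```python
import fcntl
import os
import tempfile

class FileLock:
    """Minimal stand-in for lockutils' external file lock: an
    exclusive flock(2) held on a lock file for the critical section."""
    def __init__(self, path):
        self.path = path
        self.fd = None
    def __enter__(self):
        self.fd = os.open(self.path, os.O_RDWR | os.O_CREAT, 0o644)
        fcntl.flock(self.fd, fcntl.LOCK_EX)
        return self
    def __exit__(self, *exc):
        fcntl.flock(self.fd, fcntl.LOCK_UN)
        os.close(self.fd)

lock_path = os.path.join(tempfile.gettempdir(), "demo-disk.info.lock")
with FileLock(lock_path):
    # A second open file description cannot take the lock while we hold it.
    fd2 = os.open(lock_path, os.O_RDWR)
    try:
        fcntl.flock(fd2, fcntl.LOCK_EX | fcntl.LOCK_NB)
        contended = False
    except BlockingIOError:
        contended = True
    os.close(fd2)
print(contended)  # True
```

Because the lock lives on the filesystem rather than in process memory, it also serializes against other processes (or a restarted nova-compute) touching the same `disk.info`.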
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.464 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.513 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.514 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.515 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.526 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.588 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.590 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.629 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.630 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.631 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.685 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.687 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.687 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Ensure instance console log exists: /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.688 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.688 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:58:27 compute-0 nova_compute[189016]: 2026-02-18 14:58:27.688 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:58:27 compute-0 podman[244489]: 2026-02-18 14:58:27.746921562 +0000 UTC m=+0.067686877 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, release=1770267347, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, version=9.7, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 18 14:58:27 compute-0 podman[244488]: 2026-02-18 14:58:27.780430511 +0000 UTC m=+0.104945691 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 18 14:58:29 compute-0 podman[204930]: time="2026-02-18T14:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:58:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 14:58:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4368 "" "Go-http-client/1.1"
Feb 18 14:58:30 compute-0 nova_compute[189016]: 2026-02-18 14:58:30.079 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:30 compute-0 nova_compute[189016]: 2026-02-18 14:58:30.753 189020 DEBUG nova.network.neutron [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Successfully updated port: f12fffeb-5027-4ab9-8d51-b603e9bfbedd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 18 14:58:30 compute-0 nova_compute[189016]: 2026-02-18 14:58:30.768 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "refresh_cache-c469573f-54e2-4c7f-9223-77500b7b9ea2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 14:58:30 compute-0 nova_compute[189016]: 2026-02-18 14:58:30.769 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquired lock "refresh_cache-c469573f-54e2-4c7f-9223-77500b7b9ea2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 14:58:30 compute-0 nova_compute[189016]: 2026-02-18 14:58:30.769 189020 DEBUG nova.network.neutron [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 18 14:58:30 compute-0 nova_compute[189016]: 2026-02-18 14:58:30.837 189020 DEBUG nova.compute.manager [req-14f42988-a01c-4250-829b-b94424c2cc7b req-8501b39c-d691-454f-b0c4-2691c17f35ad af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Received event network-changed-f12fffeb-5027-4ab9-8d51-b603e9bfbedd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 14:58:30 compute-0 nova_compute[189016]: 2026-02-18 14:58:30.838 189020 DEBUG nova.compute.manager [req-14f42988-a01c-4250-829b-b94424c2cc7b req-8501b39c-d691-454f-b0c4-2691c17f35ad af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Refreshing instance network info cache due to event network-changed-f12fffeb-5027-4ab9-8d51-b603e9bfbedd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 18 14:58:30 compute-0 nova_compute[189016]: 2026-02-18 14:58:30.839 189020 DEBUG oslo_concurrency.lockutils [req-14f42988-a01c-4250-829b-b94424c2cc7b req-8501b39c-d691-454f-b0c4-2691c17f35ad af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-c469573f-54e2-4c7f-9223-77500b7b9ea2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 14:58:30 compute-0 nova_compute[189016]: 2026-02-18 14:58:30.910 189020 DEBUG nova.network.neutron [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 18 14:58:31 compute-0 openstack_network_exporter[208107]: ERROR   14:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:58:31 compute-0 openstack_network_exporter[208107]: ERROR   14:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:58:31 compute-0 nova_compute[189016]: 2026-02-18 14:58:31.773 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:31 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:31.785 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bff0df27-aa33-4d98-b417-cc9248f7a486, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.940 189020 DEBUG nova.network.neutron [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Updating instance_info_cache with network_info: [{"id": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "address": "fa:16:3e:8f:5f:2a", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf12fffeb-50", "ovs_interfaceid": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.965 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Releasing lock "refresh_cache-c469573f-54e2-4c7f-9223-77500b7b9ea2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.966 189020 DEBUG nova.compute.manager [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Instance network_info: |[{"id": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "address": "fa:16:3e:8f:5f:2a", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf12fffeb-50", "ovs_interfaceid": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.966 189020 DEBUG oslo_concurrency.lockutils [req-14f42988-a01c-4250-829b-b94424c2cc7b req-8501b39c-d691-454f-b0c4-2691c17f35ad af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-c469573f-54e2-4c7f-9223-77500b7b9ea2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.967 189020 DEBUG nova.network.neutron [req-14f42988-a01c-4250-829b-b94424c2cc7b req-8501b39c-d691-454f-b0c4-2691c17f35ad af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Refreshing network info cache for port f12fffeb-5027-4ab9-8d51-b603e9bfbedd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.970 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Start _get_guest_xml network_info=[{"id": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "address": "fa:16:3e:8f:5f:2a", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf12fffeb-50", "ovs_interfaceid": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-18T14:51:30Z,direct_url=<?>,disk_format='qcow2',id=7cc2a96a-1e6c-474d-b671-0e2626bf4158,min_disk=0,min_ram=0,name='cirros',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-18T14:51:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'image_id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}], 'ephemerals': [{'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'device_type': 'disk', 'size': 1, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.978 189020 WARNING nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.988 189020 DEBUG nova.virt.libvirt.host [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.989 189020 DEBUG nova.virt.libvirt.host [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.993 189020 DEBUG nova.virt.libvirt.host [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.994 189020 DEBUG nova.virt.libvirt.host [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.995 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.995 189020 DEBUG nova.virt.hardware [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-18T14:51:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='23e98520-0527-4596-8420-5ff1feeb3155',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-18T14:51:30Z,direct_url=<?>,disk_format='qcow2',id=7cc2a96a-1e6c-474d-b671-0e2626bf4158,min_disk=0,min_ram=0,name='cirros',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-18T14:51:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.996 189020 DEBUG nova.virt.hardware [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.996 189020 DEBUG nova.virt.hardware [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.997 189020 DEBUG nova.virt.hardware [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.997 189020 DEBUG nova.virt.hardware [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.998 189020 DEBUG nova.virt.hardware [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.998 189020 DEBUG nova.virt.hardware [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.999 189020 DEBUG nova.virt.hardware [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 18 14:58:32 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.999 189020 DEBUG nova.virt.hardware [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:32.999 189020 DEBUG nova.virt.hardware [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.000 189020 DEBUG nova.virt.hardware [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.004 189020 DEBUG nova.virt.libvirt.vif [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-18T14:58:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-mz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-vnf-jpd2avxlpe2p',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-vnf-jpd2avxlpe2p',id=3,image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='449d1667-0173-4809-b0e3-b50e27381afa'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71c6c5d63b07447388ace322f081ffc3',ramdisk_id='',reservation_id='r-b2fajt5r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-18T14:58:27Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yOTc1MDUyODQ0ODY4NTY2MzUyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI5NzUwNTI4NDQ4Njg1NjYzNTI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mjk3NTA1Mjg0NDg2ODU2NjM1Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI5NzUwNTI4NDQ4Njg1NjYzNTI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yOTc1MDUyODQ0ODY4NTY2MzUyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yOTc1MDUyODQ0ODY4NTY2MzUyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Feb 18 14:58:33 compute-0 nova_compute[189016]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mjk3NTA1Mjg0NDg2ODU2NjM1Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI5NzUwNTI4NDQ4Njg1NjYzNTI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yOTc1MDUyODQ0ODY4NTY2MzUyPT0tLQo=',user_id='387d978e2b494e88ad13abae2a83321d',uuid=c469573f-54e2-4c7f-9223-77500b7b9ea2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "address": "fa:16:3e:8f:5f:2a", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf12fffeb-50", "ovs_interfaceid": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.005 189020 DEBUG nova.network.os_vif_util [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converting VIF {"id": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "address": "fa:16:3e:8f:5f:2a", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf12fffeb-50", "ovs_interfaceid": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.007 189020 DEBUG nova.network.os_vif_util [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:5f:2a,bridge_name='br-int',has_traffic_filtering=True,id=f12fffeb-5027-4ab9-8d51-b603e9bfbedd,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf12fffeb-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.008 189020 DEBUG nova.objects.instance [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lazy-loading 'pci_devices' on Instance uuid c469573f-54e2-4c7f-9223-77500b7b9ea2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.022 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] End _get_guest_xml xml=<domain type="kvm">
Feb 18 14:58:33 compute-0 nova_compute[189016]:  <uuid>c469573f-54e2-4c7f-9223-77500b7b9ea2</uuid>
Feb 18 14:58:33 compute-0 nova_compute[189016]:  <name>instance-00000003</name>
Feb 18 14:58:33 compute-0 nova_compute[189016]:  <memory>524288</memory>
Feb 18 14:58:33 compute-0 nova_compute[189016]:  <vcpu>1</vcpu>
Feb 18 14:58:33 compute-0 nova_compute[189016]:  <metadata>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <nova:name>vn-mz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-vnf-jpd2avxlpe2p</nova:name>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <nova:creationTime>2026-02-18 14:58:32</nova:creationTime>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <nova:flavor name="m1.small">
Feb 18 14:58:33 compute-0 nova_compute[189016]:        <nova:memory>512</nova:memory>
Feb 18 14:58:33 compute-0 nova_compute[189016]:        <nova:disk>1</nova:disk>
Feb 18 14:58:33 compute-0 nova_compute[189016]:        <nova:swap>0</nova:swap>
Feb 18 14:58:33 compute-0 nova_compute[189016]:        <nova:ephemeral>1</nova:ephemeral>
Feb 18 14:58:33 compute-0 nova_compute[189016]:        <nova:vcpus>1</nova:vcpus>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      </nova:flavor>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <nova:owner>
Feb 18 14:58:33 compute-0 nova_compute[189016]:        <nova:user uuid="387d978e2b494e88ad13abae2a83321d">admin</nova:user>
Feb 18 14:58:33 compute-0 nova_compute[189016]:        <nova:project uuid="71c6c5d63b07447388ace322f081ffc3">admin</nova:project>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      </nova:owner>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <nova:root type="image" uuid="7cc2a96a-1e6c-474d-b671-0e2626bf4158"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <nova:ports>
Feb 18 14:58:33 compute-0 nova_compute[189016]:        <nova:port uuid="f12fffeb-5027-4ab9-8d51-b603e9bfbedd">
Feb 18 14:58:33 compute-0 nova_compute[189016]:          <nova:ip type="fixed" address="192.168.0.126" ipVersion="4"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:        </nova:port>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      </nova:ports>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    </nova:instance>
Feb 18 14:58:33 compute-0 nova_compute[189016]:  </metadata>
Feb 18 14:58:33 compute-0 nova_compute[189016]:  <sysinfo type="smbios">
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <system>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <entry name="manufacturer">RDO</entry>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <entry name="product">OpenStack Compute</entry>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <entry name="serial">c469573f-54e2-4c7f-9223-77500b7b9ea2</entry>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <entry name="uuid">c469573f-54e2-4c7f-9223-77500b7b9ea2</entry>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <entry name="family">Virtual Machine</entry>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    </system>
Feb 18 14:58:33 compute-0 nova_compute[189016]:  </sysinfo>
Feb 18 14:58:33 compute-0 nova_compute[189016]:  <os>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <boot dev="hd"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <smbios mode="sysinfo"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:  </os>
Feb 18 14:58:33 compute-0 nova_compute[189016]:  <features>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <acpi/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <apic/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <vmcoreinfo/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:  </features>
Feb 18 14:58:33 compute-0 nova_compute[189016]:  <clock offset="utc">
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <timer name="pit" tickpolicy="delay"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <timer name="hpet" present="no"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:  </clock>
Feb 18 14:58:33 compute-0 nova_compute[189016]:  <cpu mode="host-model" match="exact">
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <topology sockets="1" cores="1" threads="1"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:  </cpu>
Feb 18 14:58:33 compute-0 nova_compute[189016]:  <devices>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <disk type="file" device="disk">
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <driver name="qemu" type="qcow2" cache="none"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <target dev="vda" bus="virtio"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    </disk>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <disk type="file" device="disk">
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <driver name="qemu" type="qcow2" cache="none"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <target dev="vdb" bus="virtio"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    </disk>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <disk type="file" device="cdrom">
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <driver name="qemu" type="raw" cache="none"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.config"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <target dev="sda" bus="sata"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    </disk>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <interface type="ethernet">
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <mac address="fa:16:3e:8f:5f:2a"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <driver name="vhost" rx_queue_size="512"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <mtu size="1442"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <target dev="tapf12fffeb-50"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    </interface>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <serial type="pty">
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <log file="/var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/console.log" append="off"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    </serial>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <video>
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    </video>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <input type="tablet" bus="usb"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <rng model="virtio">
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <backend model="random">/dev/urandom</backend>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    </rng>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <controller type="usb" index="0"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    <memballoon model="virtio">
Feb 18 14:58:33 compute-0 nova_compute[189016]:      <stats period="10"/>
Feb 18 14:58:33 compute-0 nova_compute[189016]:    </memballoon>
Feb 18 14:58:33 compute-0 nova_compute[189016]:  </devices>
Feb 18 14:58:33 compute-0 nova_compute[189016]: </domain>
Feb 18 14:58:33 compute-0 nova_compute[189016]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.029 189020 DEBUG nova.compute.manager [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Preparing to wait for external event network-vif-plugged-f12fffeb-5027-4ab9-8d51-b603e9bfbedd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.030 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.031 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.031 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.032 189020 DEBUG nova.virt.libvirt.vif [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-18T14:58:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-mz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-vnf-jpd2avxlpe2p',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-vnf-jpd2avxlpe2p',id=3,image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='449d1667-0173-4809-b0e3-b50e27381afa'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71c6c5d63b07447388ace322f081ffc3',ramdisk_id='',reservation_id='r-b2fajt5r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-18T14:58:27Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yOTc1MDUyODQ0ODY4NTY2MzUyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI5NzUwNTI4NDQ4Njg1NjYzNTI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mjk3NTA1Mjg0NDg2ODU2NjM1Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI5NzUwNTI4NDQ4Njg1NjYzNTI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yOTc1MDUyODQ0ODY4NTY2MzUyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yOTc1MDUyODQ0ODY4NTY2MzUyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Feb 18 14:58:33 compute-0 nova_compute[189016]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mjk3NTA1Mjg0NDg2ODU2NjM1Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI5NzUwNTI4NDQ4Njg1NjYzNTI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yOTc1MDUyODQ0ODY4NTY2MzUyPT0tLQo=',user_id='387d978e2b494e88ad13abae2a83321d',uuid=c469573f-54e2-4c7f-9223-77500b7b9ea2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "address": "fa:16:3e:8f:5f:2a", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf12fffeb-50", "ovs_interfaceid": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.033 189020 DEBUG nova.network.os_vif_util [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converting VIF {"id": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "address": "fa:16:3e:8f:5f:2a", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf12fffeb-50", "ovs_interfaceid": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.033 189020 DEBUG nova.network.os_vif_util [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:5f:2a,bridge_name='br-int',has_traffic_filtering=True,id=f12fffeb-5027-4ab9-8d51-b603e9bfbedd,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf12fffeb-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.034 189020 DEBUG os_vif [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:5f:2a,bridge_name='br-int',has_traffic_filtering=True,id=f12fffeb-5027-4ab9-8d51-b603e9bfbedd,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf12fffeb-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.035 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.036 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.036 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.042 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.043 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf12fffeb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.043 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf12fffeb-50, col_values=(('external_ids', {'iface-id': 'f12fffeb-5027-4ab9-8d51-b603e9bfbedd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:5f:2a', 'vm-uuid': 'c469573f-54e2-4c7f-9223-77500b7b9ea2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.045 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.047 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 18 14:58:33 compute-0 NetworkManager[57258]: <info>  [1771426713.0485] manager: (tapf12fffeb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.068 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.069 189020 INFO os_vif [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:5f:2a,bridge_name='br-int',has_traffic_filtering=True,id=f12fffeb-5027-4ab9-8d51-b603e9bfbedd,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf12fffeb-50')#033[00m
Feb 18 14:58:33 compute-0 rsyslogd[239561]: message too long (8192) with configured size 8096, begin of message is: 2026-02-18 14:58:33.004 189020 DEBUG nova.virt.libvirt.vif [None req-9e2dfba9-6c [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 14:58:33 compute-0 rsyslogd[239561]: message too long (8192) with configured size 8096, begin of message is: 2026-02-18 14:58:33.032 189020 DEBUG nova.virt.libvirt.vif [None req-9e2dfba9-6c [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.130 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.131 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.132 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.132 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No VIF found with MAC fa:16:3e:8f:5f:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.133 189020 INFO nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Using config drive#033[00m
Feb 18 14:58:33 compute-0 podman[244538]: 2026-02-18 14:58:33.731086173 +0000 UTC m=+0.062233018 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.854 189020 INFO nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Creating config drive at /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.config#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.859 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp2lttgww3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:33 compute-0 nova_compute[189016]: 2026-02-18 14:58:33.981 189020 DEBUG oslo_concurrency.processutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp2lttgww3" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:34 compute-0 kernel: tapf12fffeb-50: entered promiscuous mode
Feb 18 14:58:34 compute-0 ovn_controller[99062]: 2026-02-18T14:58:34Z|00040|binding|INFO|Claiming lport f12fffeb-5027-4ab9-8d51-b603e9bfbedd for this chassis.
Feb 18 14:58:34 compute-0 ovn_controller[99062]: 2026-02-18T14:58:34Z|00041|binding|INFO|f12fffeb-5027-4ab9-8d51-b603e9bfbedd: Claiming fa:16:3e:8f:5f:2a 192.168.0.126
Feb 18 14:58:34 compute-0 NetworkManager[57258]: <info>  [1771426714.0586] manager: (tapf12fffeb-50): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.061 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:34 compute-0 ovn_controller[99062]: 2026-02-18T14:58:34Z|00042|binding|INFO|Setting lport f12fffeb-5027-4ab9-8d51-b603e9bfbedd ovn-installed in OVS
Feb 18 14:58:34 compute-0 ovn_controller[99062]: 2026-02-18T14:58:34Z|00043|binding|INFO|Setting lport f12fffeb-5027-4ab9-8d51-b603e9bfbedd up in Southbound
Feb 18 14:58:34 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:34.070 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:5f:2a 192.168.0.126'], port_security=['fa:16:3e:8f:5f:2a 192.168.0.126'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-g63ccmz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-port-nheb7ynxu42a', 'neutron:cidrs': '192.168.0.126/24', 'neutron:device_id': 'c469573f-54e2-4c7f-9223-77500b7b9ea2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-g63ccmz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-port-nheb7ynxu42a', 'neutron:project_id': '71c6c5d63b07447388ace322f081ffc3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37e3ac68-e35f-4df2-b2af-136d5a1ee2d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af26da4e-fd70-4a49-a6e8-0a984b969598, chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], logical_port=f12fffeb-5027-4ab9-8d51-b603e9bfbedd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 14:58:34 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:34.071 108400 INFO neutron.agent.ovn.metadata.agent [-] Port f12fffeb-5027-4ab9-8d51-b603e9bfbedd in datapath c269c00a-f738-4cb6-ac67-09050c56f9f2 bound to our chassis#033[00m
Feb 18 14:58:34 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:34.073 108400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c269c00a-f738-4cb6-ac67-09050c56f9f2#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.076 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:34 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:34.091 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[55146dd2-d925-4aa1-9488-2799a4520b57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:58:34 compute-0 systemd-udevd[244577]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 14:58:34 compute-0 systemd-machined[158361]: New machine qemu-3-instance-00000003.
Feb 18 14:58:34 compute-0 NetworkManager[57258]: <info>  [1771426714.1107] device (tapf12fffeb-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 18 14:58:34 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Feb 18 14:58:34 compute-0 NetworkManager[57258]: <info>  [1771426714.1156] device (tapf12fffeb-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 18 14:58:34 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:34.118 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[de1de692-4d9b-451f-8607-bc99959778e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:58:34 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:34.122 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[40c882e1-b5ba-4463-af7a-e4108bf9c88c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:58:34 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:34.146 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa165e2-5e89-41a7-b3d4-808ed92b35a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:58:34 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:34.163 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[52842c48-8209-4108-a854-fb3f5ba5e44c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc269c00a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:4d:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 346647, 'reachable_time': 36950, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244586, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:58:34 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:34.187 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[5774d38f-7f44-468c-a417-8bcbb934b13d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc269c00a-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 346658, 'tstamp': 346658}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244590, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc269c00a-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 346660, 'tstamp': 346660}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244590, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 14:58:34 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:34.190 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc269c00a-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.191 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:34 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:34.193 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc269c00a-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:58:34 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:34.194 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 14:58:34 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:34.194 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc269c00a-f0, col_values=(('external_ids', {'iface-id': '7e592dc1-2432-46dc-b338-f9a04aad5932'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 14:58:34 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:34.195 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.647 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771426714.6471844, c469573f-54e2-4c7f-9223-77500b7b9ea2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.648 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] VM Started (Lifecycle Event)#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.684 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.690 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771426714.6474, c469573f-54e2-4c7f-9223-77500b7b9ea2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.691 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] VM Paused (Lifecycle Event)#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.705 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.711 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.728 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.937 189020 DEBUG nova.compute.manager [req-44bfb865-3ecb-48ed-8512-029fe9d34f61 req-e755fbb8-ef55-4cea-bb01-c91567da51df af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Received event network-vif-plugged-f12fffeb-5027-4ab9-8d51-b603e9bfbedd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.937 189020 DEBUG oslo_concurrency.lockutils [req-44bfb865-3ecb-48ed-8512-029fe9d34f61 req-e755fbb8-ef55-4cea-bb01-c91567da51df af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.938 189020 DEBUG oslo_concurrency.lockutils [req-44bfb865-3ecb-48ed-8512-029fe9d34f61 req-e755fbb8-ef55-4cea-bb01-c91567da51df af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.939 189020 DEBUG oslo_concurrency.lockutils [req-44bfb865-3ecb-48ed-8512-029fe9d34f61 req-e755fbb8-ef55-4cea-bb01-c91567da51df af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.939 189020 DEBUG nova.compute.manager [req-44bfb865-3ecb-48ed-8512-029fe9d34f61 req-e755fbb8-ef55-4cea-bb01-c91567da51df af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Processing event network-vif-plugged-f12fffeb-5027-4ab9-8d51-b603e9bfbedd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.940 189020 DEBUG nova.compute.manager [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.946 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.947 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771426714.9468389, c469573f-54e2-4c7f-9223-77500b7b9ea2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.948 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] VM Resumed (Lifecycle Event)#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.954 189020 INFO nova.virt.libvirt.driver [-] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Instance spawned successfully.#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.955 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.974 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.990 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.997 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 14:58:34 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.998 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 14:58:35 compute-0 nova_compute[189016]: 2026-02-18 14:58:34.999 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 14:58:35 compute-0 nova_compute[189016]: 2026-02-18 14:58:35.000 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 14:58:35 compute-0 nova_compute[189016]: 2026-02-18 14:58:35.001 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 14:58:35 compute-0 nova_compute[189016]: 2026-02-18 14:58:35.003 189020 DEBUG nova.virt.libvirt.driver [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 14:58:35 compute-0 nova_compute[189016]: 2026-02-18 14:58:35.035 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 18 14:58:35 compute-0 nova_compute[189016]: 2026-02-18 14:58:35.090 189020 INFO nova.compute.manager [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Took 7.98 seconds to spawn the instance on the hypervisor.#033[00m
Feb 18 14:58:35 compute-0 nova_compute[189016]: 2026-02-18 14:58:35.091 189020 DEBUG nova.compute.manager [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 14:58:35 compute-0 nova_compute[189016]: 2026-02-18 14:58:35.153 189020 INFO nova.compute.manager [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Took 8.52 seconds to build instance.#033[00m
Feb 18 14:58:35 compute-0 nova_compute[189016]: 2026-02-18 14:58:35.171 189020 DEBUG oslo_concurrency.lockutils [None req-9e2dfba9-6ccf-427b-883a-d4a8f8f57f1b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:58:35 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 18 14:58:35 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 18 14:58:35 compute-0 podman[244599]: 2026-02-18 14:58:35.761997485 +0000 UTC m=+0.096233696 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Feb 18 14:58:35 compute-0 podman[244600]: 2026-02-18 14:58:35.765474378 +0000 UTC m=+0.097859245 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9, vcs-type=git, config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Feb 18 14:58:35 compute-0 nova_compute[189016]: 2026-02-18 14:58:35.871 189020 DEBUG nova.network.neutron [req-14f42988-a01c-4250-829b-b94424c2cc7b req-8501b39c-d691-454f-b0c4-2691c17f35ad af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Updated VIF entry in instance network info cache for port f12fffeb-5027-4ab9-8d51-b603e9bfbedd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 18 14:58:35 compute-0 nova_compute[189016]: 2026-02-18 14:58:35.873 189020 DEBUG nova.network.neutron [req-14f42988-a01c-4250-829b-b94424c2cc7b req-8501b39c-d691-454f-b0c4-2691c17f35ad af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Updating instance_info_cache with network_info: [{"id": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "address": "fa:16:3e:8f:5f:2a", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf12fffeb-50", "ovs_interfaceid": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 14:58:35 compute-0 nova_compute[189016]: 2026-02-18 14:58:35.892 189020 DEBUG oslo_concurrency.lockutils [req-14f42988-a01c-4250-829b-b94424c2cc7b req-8501b39c-d691-454f-b0c4-2691c17f35ad af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-c469573f-54e2-4c7f-9223-77500b7b9ea2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 14:58:36 compute-0 nova_compute[189016]: 2026-02-18 14:58:36.775 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:37 compute-0 nova_compute[189016]: 2026-02-18 14:58:37.030 189020 DEBUG nova.compute.manager [req-66b136a0-6bdd-41d8-8332-f72d669ea683 req-eb73ebe3-d2ae-4fd7-9aa5-3ce137be3df0 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Received event network-vif-plugged-f12fffeb-5027-4ab9-8d51-b603e9bfbedd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 14:58:37 compute-0 nova_compute[189016]: 2026-02-18 14:58:37.030 189020 DEBUG oslo_concurrency.lockutils [req-66b136a0-6bdd-41d8-8332-f72d669ea683 req-eb73ebe3-d2ae-4fd7-9aa5-3ce137be3df0 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:58:37 compute-0 nova_compute[189016]: 2026-02-18 14:58:37.032 189020 DEBUG oslo_concurrency.lockutils [req-66b136a0-6bdd-41d8-8332-f72d669ea683 req-eb73ebe3-d2ae-4fd7-9aa5-3ce137be3df0 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:58:37 compute-0 nova_compute[189016]: 2026-02-18 14:58:37.032 189020 DEBUG oslo_concurrency.lockutils [req-66b136a0-6bdd-41d8-8332-f72d669ea683 req-eb73ebe3-d2ae-4fd7-9aa5-3ce137be3df0 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:58:37 compute-0 nova_compute[189016]: 2026-02-18 14:58:37.032 189020 DEBUG nova.compute.manager [req-66b136a0-6bdd-41d8-8332-f72d669ea683 req-eb73ebe3-d2ae-4fd7-9aa5-3ce137be3df0 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] No waiting events found dispatching network-vif-plugged-f12fffeb-5027-4ab9-8d51-b603e9bfbedd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 14:58:37 compute-0 nova_compute[189016]: 2026-02-18 14:58:37.033 189020 WARNING nova.compute.manager [req-66b136a0-6bdd-41d8-8332-f72d669ea683 req-eb73ebe3-d2ae-4fd7-9aa5-3ce137be3df0 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Received unexpected event network-vif-plugged-f12fffeb-5027-4ab9-8d51-b603e9bfbedd for instance with vm_state active and task_state None.#033[00m
Feb 18 14:58:38 compute-0 nova_compute[189016]: 2026-02-18 14:58:38.046 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:40 compute-0 nova_compute[189016]: 2026-02-18 14:58:40.044 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:58:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:41.429 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:58:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:41.431 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:58:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:58:41.432 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:58:41 compute-0 nova_compute[189016]: 2026-02-18 14:58:41.778 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:42 compute-0 nova_compute[189016]: 2026-02-18 14:58:42.049 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:58:42 compute-0 nova_compute[189016]: 2026-02-18 14:58:42.050 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 14:58:42 compute-0 nova_compute[189016]: 2026-02-18 14:58:42.050 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 14:58:42 compute-0 nova_compute[189016]: 2026-02-18 14:58:42.755 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 14:58:42 compute-0 nova_compute[189016]: 2026-02-18 14:58:42.755 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 14:58:42 compute-0 nova_compute[189016]: 2026-02-18 14:58:42.755 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 14:58:42 compute-0 nova_compute[189016]: 2026-02-18 14:58:42.756 189020 DEBUG nova.objects.instance [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lazy-loading 'info_cache' on Instance uuid debb3011-9258-4f04-9eb4-592cc56eb3eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 14:58:43 compute-0 nova_compute[189016]: 2026-02-18 14:58:43.048 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:43 compute-0 podman[244654]: 2026-02-18 14:58:43.744648148 +0000 UTC m=+0.068536900 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS)
Feb 18 14:58:43 compute-0 podman[244655]: 2026-02-18 14:58:43.764357632 +0000 UTC m=+0.088246344 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 14:58:44 compute-0 nova_compute[189016]: 2026-02-18 14:58:44.097 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updating instance_info_cache with network_info: [{"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 14:58:44 compute-0 nova_compute[189016]: 2026-02-18 14:58:44.117 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 14:58:44 compute-0 nova_compute[189016]: 2026-02-18 14:58:44.118 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 14:58:44 compute-0 nova_compute[189016]: 2026-02-18 14:58:44.120 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:58:44 compute-0 nova_compute[189016]: 2026-02-18 14:58:44.121 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:58:44 compute-0 nova_compute[189016]: 2026-02-18 14:58:44.122 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:58:44 compute-0 nova_compute[189016]: 2026-02-18 14:58:44.122 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 14:58:45 compute-0 nova_compute[189016]: 2026-02-18 14:58:45.053 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:58:45 compute-0 nova_compute[189016]: 2026-02-18 14:58:45.054 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.045 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.069 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.070 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.097 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.097 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.098 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.098 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.200 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:46 compute-0 podman[244694]: 2026-02-18 14:58:46.252265637 +0000 UTC m=+0.092968517 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true)
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.261 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.262 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.313 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.314 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.365 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.368 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.430 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.437 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.488 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.489 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.547 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.548 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.605 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.611 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.670 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.678 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.737 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.739 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.782 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.799 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.800 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.851 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.853 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:58:46 compute-0 nova_compute[189016]: 2026-02-18 14:58:46.907 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:58:47 compute-0 nova_compute[189016]: 2026-02-18 14:58:47.254 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 14:58:47 compute-0 nova_compute[189016]: 2026-02-18 14:58:47.258 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4899MB free_disk=72.22532272338867GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 14:58:47 compute-0 nova_compute[189016]: 2026-02-18 14:58:47.258 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:58:47 compute-0 nova_compute[189016]: 2026-02-18 14:58:47.259 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:58:47 compute-0 nova_compute[189016]: 2026-02-18 14:58:47.346 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 14:58:47 compute-0 nova_compute[189016]: 2026-02-18 14:58:47.347 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 9a9ee96c-8146-46a1-a098-5d021fb5e779 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 14:58:47 compute-0 nova_compute[189016]: 2026-02-18 14:58:47.348 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance c469573f-54e2-4c7f-9223-77500b7b9ea2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 14:58:47 compute-0 nova_compute[189016]: 2026-02-18 14:58:47.348 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 14:58:47 compute-0 nova_compute[189016]: 2026-02-18 14:58:47.349 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 14:58:47 compute-0 nova_compute[189016]: 2026-02-18 14:58:47.420 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 14:58:47 compute-0 nova_compute[189016]: 2026-02-18 14:58:47.434 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 14:58:47 compute-0 nova_compute[189016]: 2026-02-18 14:58:47.458 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 14:58:47 compute-0 nova_compute[189016]: 2026-02-18 14:58:47.458 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:58:48 compute-0 nova_compute[189016]: 2026-02-18 14:58:48.051 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:51 compute-0 nova_compute[189016]: 2026-02-18 14:58:51.783 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:53 compute-0 nova_compute[189016]: 2026-02-18 14:58:53.053 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:56 compute-0 nova_compute[189016]: 2026-02-18 14:58:56.785 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:58 compute-0 nova_compute[189016]: 2026-02-18 14:58:58.055 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:58:58 compute-0 podman[244759]: 2026-02-18 14:58:58.743243374 +0000 UTC m=+0.074057973 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 18 14:58:58 compute-0 podman[244760]: 2026-02-18 14:58:58.77843281 +0000 UTC m=+0.098292385 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Feb 18 14:58:59 compute-0 podman[204930]: time="2026-02-18T14:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:58:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 14:58:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Feb 18 14:59:01 compute-0 openstack_network_exporter[208107]: ERROR   14:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:59:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:59:01 compute-0 openstack_network_exporter[208107]: ERROR   14:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:59:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:59:02 compute-0 nova_compute[189016]: 2026-02-18 14:59:02.015 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:03 compute-0 nova_compute[189016]: 2026-02-18 14:59:03.057 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:04 compute-0 ovn_controller[99062]: 2026-02-18T14:59:04Z|00044|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Feb 18 14:59:04 compute-0 podman[244806]: 2026-02-18 14:59:04.733343139 +0000 UTC m=+0.058738214 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Feb 18 14:59:06 compute-0 podman[244826]: 2026-02-18 14:59:06.756893194 +0000 UTC m=+0.086602954 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 18 14:59:06 compute-0 podman[244827]: 2026-02-18 14:59:06.7837315 +0000 UTC m=+0.111881692 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., release-0.7.12=, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, name=ubi9, config_id=kepler, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, version=9.4, architecture=x86_64)
Feb 18 14:59:07 compute-0 nova_compute[189016]: 2026-02-18 14:59:07.018 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:07 compute-0 ovn_controller[99062]: 2026-02-18T14:59:07Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8f:5f:2a 192.168.0.126
Feb 18 14:59:07 compute-0 ovn_controller[99062]: 2026-02-18T14:59:07Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8f:5f:2a 192.168.0.126
Feb 18 14:59:08 compute-0 nova_compute[189016]: 2026-02-18 14:59:08.060 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:12 compute-0 nova_compute[189016]: 2026-02-18 14:59:12.022 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:13 compute-0 nova_compute[189016]: 2026-02-18 14:59:13.064 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:14 compute-0 podman[244867]: 2026-02-18 14:59:14.741709851 +0000 UTC m=+0.062099165 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 18 14:59:14 compute-0 podman[244866]: 2026-02-18 14:59:14.750417091 +0000 UTC m=+0.074014132 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Feb 18 14:59:16 compute-0 podman[244907]: 2026-02-18 14:59:16.763828591 +0000 UTC m=+0.092756193 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 18 14:59:17 compute-0 nova_compute[189016]: 2026-02-18 14:59:17.025 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:18 compute-0 nova_compute[189016]: 2026-02-18 14:59:18.068 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:22 compute-0 nova_compute[189016]: 2026-02-18 14:59:22.028 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:23 compute-0 nova_compute[189016]: 2026-02-18 14:59:23.071 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.192 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.193 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.193 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.193 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f78deee07d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.194 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.194 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.194 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.203 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9a9ee96c-8146-46a1-a098-5d021fb5e779', 'name': 'vn-mz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-vnf-e5gqxak74kuc', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {'metering.server_group': '449d1667-0173-4809-b0e3-b50e27381afa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.206 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance c469573f-54e2-4c7f-9223-77500b7b9ea2 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.209 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/c469573f-54e2-4c7f-9223-77500b7b9ea2 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}fc9f399571f1151dad02e1ad7f2b10f5a5ac66aa5da5d4c981c78739a1fdba51" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.913 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Wed, 18 Feb 2026 14:59:25 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-2f8ea02d-6ad9-4578-afff-3e2d1254c6ce x-openstack-request-id: req-2f8ea02d-6ad9-4578-afff-3e2d1254c6ce _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.914 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "c469573f-54e2-4c7f-9223-77500b7b9ea2", "name": "vn-mz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-vnf-jpd2avxlpe2p", "status": "ACTIVE", "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "user_id": "387d978e2b494e88ad13abae2a83321d", "metadata": {"metering.server_group": "449d1667-0173-4809-b0e3-b50e27381afa"}, "hostId": "446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd", "image": {"id": "7cc2a96a-1e6c-474d-b671-0e2626bf4158", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/7cc2a96a-1e6c-474d-b671-0e2626bf4158"}]}, "flavor": {"id": "23e98520-0527-4596-8420-5ff1feeb3155", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/23e98520-0527-4596-8420-5ff1feeb3155"}]}, "created": "2026-02-18T14:58:25Z", "updated": "2026-02-18T14:58:35Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.126", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:8f:5f:2a"}, {"version": 4, "addr": "192.168.122.198", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:8f:5f:2a"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/c469573f-54e2-4c7f-9223-77500b7b9ea2"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/c469573f-54e2-4c7f-9223-77500b7b9ea2"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-18T14:58:35.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.914 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/c469573f-54e2-4c7f-9223-77500b7b9ea2 used request id req-2f8ea02d-6ad9-4578-afff-3e2d1254c6ce request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.916 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c469573f-54e2-4c7f-9223-77500b7b9ea2', 'name': 'vn-mz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-vnf-jpd2avxlpe2p', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {'metering.server_group': '449d1667-0173-4809-b0e3-b50e27381afa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.922 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'debb3011-9258-4f04-9eb4-592cc56eb3eb', 'name': 'test_0', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.923 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.923 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.923 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.924 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:25.927 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-18T14:59:25.924205) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.030 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.latency volume: 817918800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.030 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.latency volume: 168311221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.031 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.latency volume: 348237395 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.093 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.latency volume: 585395980 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.093 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.latency volume: 111231743 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.094 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.latency volume: 75467101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.153 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 698777971 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.154 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 118502891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.154 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 69122624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.155 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.155 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f78deee0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.155 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.155 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.155 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.155 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.156 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.156 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.156 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.156 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-18T14:59:26.155803) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.156 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.156 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.157 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.157 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.157 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.157 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.158 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.158 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f78deee0830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.158 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.158 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.158 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.158 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.158 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.159 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.159 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-18T14:59:26.158659) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.159 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.159 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.159 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.160 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.160 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.160 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.160 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.160 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.161 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f78deee2ae0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.161 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.161 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.161 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.161 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.161 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-18T14:59:26.161340) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.188 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.188 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.189 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.225 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.allocation volume: 21897216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.226 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.226 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.247 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.247 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.248 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.249 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.249 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f78deee0890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.249 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.249 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.249 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.249 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.249 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.250 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.250 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.250 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-18T14:59:26.249695) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.251 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.251 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.251 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.251 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.251 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.252 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.252 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.252 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f78deee08f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.253 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.253 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.253 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.253 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.253 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.bytes volume: 41816064 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.253 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.254 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.254 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.254 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.255 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.255 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-18T14:59:26.253385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.255 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.255 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.256 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.256 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.256 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f78deee2390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.256 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.257 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.257 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.257 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.257 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-18T14:59:26.257240) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.262 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.265 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for c469573f-54e2-4c7f-9223-77500b7b9ea2 / tapf12fffeb-50 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.265 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.270 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.270 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.270 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f78e1127770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.270 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.271 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.271 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.271 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.271 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-18T14:59:26.271278) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.296 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/cpu volume: 286940000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.319 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/cpu volume: 32000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.338 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/cpu volume: 35950000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.339 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.340 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f78deee0950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.340 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.340 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.340 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.340 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.341 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.latency volume: 2131671363 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.341 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.latency volume: 22293021 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.341 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-18T14:59:26.340703) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.341 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.342 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.latency volume: 2041830352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.342 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.latency volume: 9395246 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.342 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.342 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 2529045248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.343 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 11626356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.343 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.343 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.344 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f78deee21b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.344 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.344 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.344 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.344 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.344 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.packets volume: 44 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.345 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.345 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.346 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.346 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f78deee0d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.346 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.346 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-18T14:59:26.344631) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.347 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.348 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.348 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-18T14:59:26.347902) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.349 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.349 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.350 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.350 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f78deee09b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.350 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.350 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.350 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.350 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.351 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.351 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-18T14:59:26.350859) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.352 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.352 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.352 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.requests volume: 222 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.352 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.353 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.353 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.353 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.353 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.354 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.354 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f78deee1010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.355 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.355 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.355 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.355 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.355 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.355 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.356 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-18T14:59:26.355310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.357 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.357 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f78deee0a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.357 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.357 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.357 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.357 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.358 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-18T14:59:26.357695) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.358 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f78deee1280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.359 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.359 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.359 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.359 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-18T14:59:26.359261) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.359 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.360 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.360 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.360 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.360 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f78deee2240>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.360 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.361 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.361 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.361 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.361 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.bytes volume: 4962 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-18T14:59:26.361265) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.362 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.bytes volume: 1906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.362 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.362 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.363 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f78deee0a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.363 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.363 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.363 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.363 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-18T14:59:26.363403) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.364 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.364 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f78deee22d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.364 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.364 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.365 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.365 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-18T14:59:26.365153) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.365 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.365 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.366 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.366 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.366 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f78deee2330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.366 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.367 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.367 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.367 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.367 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-18T14:59:26.367205) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.368 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-mz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-vnf-jpd2avxlpe2p>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-mz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-vnf-jpd2avxlpe2p>]
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.368 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f78deee23c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.368 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.368 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.368 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.369 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.369 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.369 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.369 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-18T14:59:26.368999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.370 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.370 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.370 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f78deee0cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.370 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.370 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.370 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.371 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.371 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/memory.usage volume: 49.09375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.371 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-18T14:59:26.370932) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.371 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/memory.usage volume: 49.60546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.371 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/memory.usage volume: 48.765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.372 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.372 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f78deee2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.372 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.372 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.372 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.372 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.372 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.373 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.373 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-18T14:59:26.372600) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.373 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.373 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.374 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.374 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.374 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.374 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.375 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.375 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.375 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f78deee0d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.376 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.376 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.376 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.376 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.376 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.376 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.377 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-18T14:59:26.376389) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.377 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.378 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.378 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f78deee1520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.378 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.378 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.378 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.378 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.379 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-18T14:59:26.378709) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.379 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.379 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.379 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.380 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.380 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f78deee20f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.380 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.380 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.380 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.380 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.380 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.381 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.381 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-18T14:59:26.380728) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.381 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.382 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.382 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f78deee2420>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.382 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.382 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.382 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.382 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.382 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.383 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-mz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-vnf-jpd2avxlpe2p>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-mz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-vnf-jpd2avxlpe2p>]
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.383 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.384 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.384 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.384 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-18T14:59:26.382811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.384 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.385 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.385 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.385 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.385 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.385 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.385 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.385 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.385 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.385 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.386 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.386 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.386 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.386 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.386 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.386 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.386 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.386 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.386 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.387 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.387 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 14:59:26.387 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 14:59:27 compute-0 nova_compute[189016]: 2026-02-18 14:59:27.032 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 14:59:28 compute-0 nova_compute[189016]: 2026-02-18 14:59:28.075 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 14:59:29 compute-0 podman[204930]: time="2026-02-18T14:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:59:29 compute-0 podman[244935]: 2026-02-18 14:59:29.761176888 +0000 UTC m=+0.077959236 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., version=9.7, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, release=1770267347, io.openshift.expose-services=)
Feb 18 14:59:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 14:59:29 compute-0 podman[204930]: @ - - [18/Feb/2026:14:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Feb 18 14:59:29 compute-0 podman[244934]: 2026-02-18 14:59:29.7924348 +0000 UTC m=+0.103147402 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 18 14:59:31 compute-0 openstack_network_exporter[208107]: ERROR   14:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 14:59:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:59:31 compute-0 openstack_network_exporter[208107]: ERROR   14:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 14:59:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 14:59:32 compute-0 nova_compute[189016]: 2026-02-18 14:59:32.034 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 14:59:33 compute-0 nova_compute[189016]: 2026-02-18 14:59:33.079 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 14:59:35 compute-0 podman[244976]: 2026-02-18 14:59:35.773835757 +0000 UTC m=+0.090083018 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 18 14:59:37 compute-0 nova_compute[189016]: 2026-02-18 14:59:37.037 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 14:59:37 compute-0 podman[244996]: 2026-02-18 14:59:37.141230918 +0000 UTC m=+0.075647131 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 18 14:59:37 compute-0 podman[244997]: 2026-02-18 14:59:37.1633298 +0000 UTC m=+0.093591553 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.29.0, version=9.4, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, architecture=x86_64, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=)
Feb 18 14:59:38 compute-0 nova_compute[189016]: 2026-02-18 14:59:38.083 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:59:41.433 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:59:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:59:41.437 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:59:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 14:59:41.441 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:59:42 compute-0 nova_compute[189016]: 2026-02-18 14:59:42.052 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:42 compute-0 nova_compute[189016]: 2026-02-18 14:59:42.459 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:59:43 compute-0 nova_compute[189016]: 2026-02-18 14:59:43.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:59:43 compute-0 nova_compute[189016]: 2026-02-18 14:59:43.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 14:59:43 compute-0 nova_compute[189016]: 2026-02-18 14:59:43.086 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:43 compute-0 nova_compute[189016]: 2026-02-18 14:59:43.761 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 14:59:43 compute-0 nova_compute[189016]: 2026-02-18 14:59:43.761 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 14:59:43 compute-0 nova_compute[189016]: 2026-02-18 14:59:43.761 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 14:59:45 compute-0 nova_compute[189016]: 2026-02-18 14:59:45.312 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Updating instance_info_cache with network_info: [{"id": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "address": "fa:16:3e:65:30:c3", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578e1a09-d9", "ovs_interfaceid": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 14:59:45 compute-0 nova_compute[189016]: 2026-02-18 14:59:45.353 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 14:59:45 compute-0 nova_compute[189016]: 2026-02-18 14:59:45.354 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 14:59:45 compute-0 nova_compute[189016]: 2026-02-18 14:59:45.354 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:59:45 compute-0 nova_compute[189016]: 2026-02-18 14:59:45.355 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:59:45 compute-0 nova_compute[189016]: 2026-02-18 14:59:45.355 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:59:45 compute-0 nova_compute[189016]: 2026-02-18 14:59:45.356 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:59:45 compute-0 nova_compute[189016]: 2026-02-18 14:59:45.356 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 14:59:45 compute-0 nova_compute[189016]: 2026-02-18 14:59:45.357 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:59:45 compute-0 nova_compute[189016]: 2026-02-18 14:59:45.357 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 18 14:59:45 compute-0 podman[245036]: 2026-02-18 14:59:45.763373015 +0000 UTC m=+0.074091183 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 14:59:45 compute-0 podman[245035]: 2026-02-18 14:59:45.780183089 +0000 UTC m=+0.090899137 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.065 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.138 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.139 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.140 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.140 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.335 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.426 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.428 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.503 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.504 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.571 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.573 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.633 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.644 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.707 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.708 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.765 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.766 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.837 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.839 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.897 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.906 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.972 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:59:46 compute-0 nova_compute[189016]: 2026-02-18 14:59:46.973 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.033 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.035 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.052 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.089 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.090 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.150 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.497 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.498 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4837MB free_disk=72.20431137084961GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.499 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.499 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.637 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.637 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 9a9ee96c-8146-46a1-a098-5d021fb5e779 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.637 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance c469573f-54e2-4c7f-9223-77500b7b9ea2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.637 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.638 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 14:59:47 compute-0 podman[245115]: 2026-02-18 14:59:47.777731648 +0000 UTC m=+0.101102613 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.816 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.890 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.892 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 14:59:47 compute-0 nova_compute[189016]: 2026-02-18 14:59:47.892 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 14:59:48 compute-0 nova_compute[189016]: 2026-02-18 14:59:48.089 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:48 compute-0 nova_compute[189016]: 2026-02-18 14:59:48.878 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:59:48 compute-0 nova_compute[189016]: 2026-02-18 14:59:48.879 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:59:49 compute-0 nova_compute[189016]: 2026-02-18 14:59:49.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:59:49 compute-0 nova_compute[189016]: 2026-02-18 14:59:49.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 18 14:59:49 compute-0 nova_compute[189016]: 2026-02-18 14:59:49.097 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 18 14:59:51 compute-0 nova_compute[189016]: 2026-02-18 14:59:51.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 14:59:52 compute-0 nova_compute[189016]: 2026-02-18 14:59:52.056 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:53 compute-0 nova_compute[189016]: 2026-02-18 14:59:53.092 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:57 compute-0 nova_compute[189016]: 2026-02-18 14:59:57.061 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:58 compute-0 nova_compute[189016]: 2026-02-18 14:59:58.095 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 14:59:59 compute-0 podman[204930]: time="2026-02-18T14:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 14:59:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 14:59:59 compute-0 podman[204930]: @ - - [18/Feb/2026:14:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
Feb 18 15:00:00 compute-0 podman[245144]: 2026-02-18 15:00:00.743350253 +0000 UTC m=+0.067862043 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 15:00:00 compute-0 podman[245145]: 2026-02-18 15:00:00.749261055 +0000 UTC m=+0.071477980 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., version=9.7, architecture=x86_64, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=openstack_network_exporter, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Feb 18 15:00:01 compute-0 openstack_network_exporter[208107]: ERROR   15:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:00:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:00:01 compute-0 openstack_network_exporter[208107]: ERROR   15:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:00:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:00:02 compute-0 nova_compute[189016]: 2026-02-18 15:00:02.064 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:03 compute-0 nova_compute[189016]: 2026-02-18 15:00:03.098 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:06 compute-0 podman[245186]: 2026-02-18 15:00:06.733022707 +0000 UTC m=+0.063151590 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 18 15:00:07 compute-0 nova_compute[189016]: 2026-02-18 15:00:07.068 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:07 compute-0 podman[245206]: 2026-02-18 15:00:07.764421456 +0000 UTC m=+0.086652535 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, config_id=kepler, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, maintainer=Red Hat, Inc.)
Feb 18 15:00:07 compute-0 podman[245205]: 2026-02-18 15:00:07.772676105 +0000 UTC m=+0.095173040 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS)
Feb 18 15:00:08 compute-0 nova_compute[189016]: 2026-02-18 15:00:08.101 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:12 compute-0 nova_compute[189016]: 2026-02-18 15:00:12.070 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:13 compute-0 nova_compute[189016]: 2026-02-18 15:00:13.104 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:16 compute-0 podman[245244]: 2026-02-18 15:00:16.764941174 +0000 UTC m=+0.081037810 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Feb 18 15:00:16 compute-0 podman[245245]: 2026-02-18 15:00:16.783429499 +0000 UTC m=+0.099151076 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 18 15:00:17 compute-0 nova_compute[189016]: 2026-02-18 15:00:17.080 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:18 compute-0 nova_compute[189016]: 2026-02-18 15:00:18.111 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:18 compute-0 podman[245288]: 2026-02-18 15:00:18.823476601 +0000 UTC m=+0.143299128 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 15:00:21 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:21.168 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:bc:a6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:81:9f:51:00:b1'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:00:21 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:21.175 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 18 15:00:21 compute-0 nova_compute[189016]: 2026-02-18 15:00:21.179 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:22 compute-0 nova_compute[189016]: 2026-02-18 15:00:22.084 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:23 compute-0 nova_compute[189016]: 2026-02-18 15:00:23.116 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:26 compute-0 nova_compute[189016]: 2026-02-18 15:00:26.468 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "5d11e815-9fde-4624-9556-a726f1b266ba" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:00:26 compute-0 nova_compute[189016]: 2026-02-18 15:00:26.469 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "5d11e815-9fde-4624-9556-a726f1b266ba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:00:26 compute-0 nova_compute[189016]: 2026-02-18 15:00:26.497 189020 DEBUG nova.compute.manager [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 18 15:00:26 compute-0 nova_compute[189016]: 2026-02-18 15:00:26.590 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:00:26 compute-0 nova_compute[189016]: 2026-02-18 15:00:26.590 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:00:26 compute-0 nova_compute[189016]: 2026-02-18 15:00:26.602 189020 DEBUG nova.virt.hardware [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 18 15:00:26 compute-0 nova_compute[189016]: 2026-02-18 15:00:26.602 189020 INFO nova.compute.claims [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 18 15:00:26 compute-0 nova_compute[189016]: 2026-02-18 15:00:26.791 189020 DEBUG nova.compute.provider_tree [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:00:26 compute-0 nova_compute[189016]: 2026-02-18 15:00:26.809 189020 DEBUG nova.scheduler.client.report [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:00:26 compute-0 nova_compute[189016]: 2026-02-18 15:00:26.829 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:00:26 compute-0 nova_compute[189016]: 2026-02-18 15:00:26.830 189020 DEBUG nova.compute.manager [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 18 15:00:26 compute-0 nova_compute[189016]: 2026-02-18 15:00:26.872 189020 DEBUG nova.compute.manager [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 18 15:00:26 compute-0 nova_compute[189016]: 2026-02-18 15:00:26.872 189020 DEBUG nova.network.neutron [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 18 15:00:26 compute-0 nova_compute[189016]: 2026-02-18 15:00:26.921 189020 INFO nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.044 189020 DEBUG nova.compute.manager [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.087 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.164 189020 DEBUG nova.compute.manager [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.166 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.167 189020 INFO nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Creating image(s)#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.168 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "/var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.168 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.170 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:00:27 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:27.184 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bff0df27-aa33-4d98-b417-cc9248f7a486, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.195 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.276 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.278 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "8e446fd4a49ba04578b223406ce2c408026401e6" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.279 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "8e446fd4a49ba04578b223406ce2c408026401e6" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.295 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.344 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.345 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6,backing_fmt=raw /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.395 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6,backing_fmt=raw /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk 1073741824" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.396 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "8e446fd4a49ba04578b223406ce2c408026401e6" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.397 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.449 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8e446fd4a49ba04578b223406ce2c408026401e6 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.450 189020 DEBUG nova.virt.disk.api [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Checking if we can resize image /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.451 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.504 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.505 189020 DEBUG nova.virt.disk.api [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Cannot resize image /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.505 189020 DEBUG nova.objects.instance [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lazy-loading 'migration_context' on Instance uuid 5d11e815-9fde-4624-9556-a726f1b266ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.523 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "/var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.524 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.525 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.539 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.601 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.602 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.603 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.617 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.668 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.669 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.702 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 1073741824" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.703 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.704 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.774 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.775 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.775 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Ensure instance console log exists: /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.776 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.776 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:00:27 compute-0 nova_compute[189016]: 2026-02-18 15:00:27.777 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:00:28 compute-0 nova_compute[189016]: 2026-02-18 15:00:28.120 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:29 compute-0 podman[204930]: time="2026-02-18T15:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:00:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:00:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Feb 18 15:00:30 compute-0 nova_compute[189016]: 2026-02-18 15:00:30.813 189020 DEBUG nova.network.neutron [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Successfully updated port: fa58a88f-dd18-4a95-98c8-21e845485f69 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 18 15:00:30 compute-0 nova_compute[189016]: 2026-02-18 15:00:30.835 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:00:30 compute-0 nova_compute[189016]: 2026-02-18 15:00:30.835 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquired lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:00:30 compute-0 nova_compute[189016]: 2026-02-18 15:00:30.836 189020 DEBUG nova.network.neutron [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 18 15:00:30 compute-0 nova_compute[189016]: 2026-02-18 15:00:30.922 189020 DEBUG nova.compute.manager [req-3930579d-dd27-4478-bfed-1f0a6e4b38cb req-32307de7-c613-46aa-be05-09ea18e61b60 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Received event network-changed-fa58a88f-dd18-4a95-98c8-21e845485f69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:00:30 compute-0 nova_compute[189016]: 2026-02-18 15:00:30.923 189020 DEBUG nova.compute.manager [req-3930579d-dd27-4478-bfed-1f0a6e4b38cb req-32307de7-c613-46aa-be05-09ea18e61b60 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Refreshing instance network info cache due to event network-changed-fa58a88f-dd18-4a95-98c8-21e845485f69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 18 15:00:30 compute-0 nova_compute[189016]: 2026-02-18 15:00:30.923 189020 DEBUG oslo_concurrency.lockutils [req-3930579d-dd27-4478-bfed-1f0a6e4b38cb req-32307de7-c613-46aa-be05-09ea18e61b60 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:00:31 compute-0 openstack_network_exporter[208107]: ERROR   15:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:00:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:00:31 compute-0 openstack_network_exporter[208107]: ERROR   15:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:00:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:00:31 compute-0 podman[245343]: 2026-02-18 15:00:31.755858111 +0000 UTC m=+0.073112904 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 18 15:00:31 compute-0 podman[245344]: 2026-02-18 15:00:31.778708233 +0000 UTC m=+0.092973730 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, name=ubi9/ubi-minimal, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, version=9.7, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., config_id=openstack_network_exporter)
Feb 18 15:00:31 compute-0 nova_compute[189016]: 2026-02-18 15:00:31.802 189020 DEBUG nova.network.neutron [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 18 15:00:32 compute-0 nova_compute[189016]: 2026-02-18 15:00:32.089 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:33 compute-0 nova_compute[189016]: 2026-02-18 15:00:33.123 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.119 189020 DEBUG nova.network.neutron [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Updating instance_info_cache with network_info: [{"id": "fa58a88f-dd18-4a95-98c8-21e845485f69", "address": "fa:16:3e:74:22:e0", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.174", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa58a88f-dd", "ovs_interfaceid": "fa58a88f-dd18-4a95-98c8-21e845485f69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.161 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Releasing lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.161 189020 DEBUG nova.compute.manager [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Instance network_info: |[{"id": "fa58a88f-dd18-4a95-98c8-21e845485f69", "address": "fa:16:3e:74:22:e0", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.174", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa58a88f-dd", "ovs_interfaceid": "fa58a88f-dd18-4a95-98c8-21e845485f69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.162 189020 DEBUG oslo_concurrency.lockutils [req-3930579d-dd27-4478-bfed-1f0a6e4b38cb req-32307de7-c613-46aa-be05-09ea18e61b60 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.162 189020 DEBUG nova.network.neutron [req-3930579d-dd27-4478-bfed-1f0a6e4b38cb req-32307de7-c613-46aa-be05-09ea18e61b60 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Refreshing network info cache for port fa58a88f-dd18-4a95-98c8-21e845485f69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.165 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Start _get_guest_xml network_info=[{"id": "fa58a88f-dd18-4a95-98c8-21e845485f69", "address": "fa:16:3e:74:22:e0", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.174", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa58a88f-dd", "ovs_interfaceid": "fa58a88f-dd18-4a95-98c8-21e845485f69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-18T14:51:30Z,direct_url=<?>,disk_format='qcow2',id=7cc2a96a-1e6c-474d-b671-0e2626bf4158,min_disk=0,min_ram=0,name='cirros',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-18T14:51:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'image_id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}], 'ephemerals': [{'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'device_type': 'disk', 'size': 1, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.172 189020 WARNING nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.178 189020 DEBUG nova.virt.libvirt.host [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.179 189020 DEBUG nova.virt.libvirt.host [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.184 189020 DEBUG nova.virt.libvirt.host [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.185 189020 DEBUG nova.virt.libvirt.host [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.186 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.186 189020 DEBUG nova.virt.hardware [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-18T14:51:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='23e98520-0527-4596-8420-5ff1feeb3155',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-18T14:51:30Z,direct_url=<?>,disk_format='qcow2',id=7cc2a96a-1e6c-474d-b671-0e2626bf4158,min_disk=0,min_ram=0,name='cirros',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-18T14:51:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.186 189020 DEBUG nova.virt.hardware [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.187 189020 DEBUG nova.virt.hardware [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.187 189020 DEBUG nova.virt.hardware [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.187 189020 DEBUG nova.virt.hardware [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.188 189020 DEBUG nova.virt.hardware [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.188 189020 DEBUG nova.virt.hardware [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.188 189020 DEBUG nova.virt.hardware [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.189 189020 DEBUG nova.virt.hardware [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.189 189020 DEBUG nova.virt.hardware [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.190 189020 DEBUG nova.virt.hardware [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.195 189020 DEBUG nova.virt.libvirt.vif [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-18T15:00:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-mz2bh6c-dqrgqnefazks-iswfzet66xcu-vnf-dzerxjls3fss',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mz2bh6c-dqrgqnefazks-iswfzet66xcu-vnf-dzerxjls3fss',id=4,image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='449d1667-0173-4809-b0e3-b50e27381afa'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71c6c5d63b07447388ace322f081ffc3',ramdisk_id='',reservation_id='r-lwioh85z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-18T15:00:27Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NjU0MzEzMjgzODMzMjgzMDY5PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU2NTQzMTMyODM4MzMyODMwNjk9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTY1NDMxMzI4MzgzMzI4MzA2OT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU2NTQzMTMyODM4MzMyODMwNjk9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NjU0MzEzMjgzODMzMjgzMDY5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NjU0MzEzMjgzODMzMjgzMDY5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Feb 18 15:00:34 compute-0 nova_compute[189016]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTY1NDMxMzI4MzgzMzI4MzA2OT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU2NTQzMTMyODM4MzMyODMwNjk9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NjU0MzEzMjgzODMzMjgzMDY5PT0tLQo=',user_id='387d978e2b494e88ad13abae2a83321d',uuid=5d11e815-9fde-4624-9556-a726f1b266ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa58a88f-dd18-4a95-98c8-21e845485f69", "address": "fa:16:3e:74:22:e0", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.174", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa58a88f-dd", "ovs_interfaceid": "fa58a88f-dd18-4a95-98c8-21e845485f69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.196 189020 DEBUG nova.network.os_vif_util [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converting VIF {"id": "fa58a88f-dd18-4a95-98c8-21e845485f69", "address": "fa:16:3e:74:22:e0", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.174", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa58a88f-dd", "ovs_interfaceid": "fa58a88f-dd18-4a95-98c8-21e845485f69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.197 189020 DEBUG nova.network.os_vif_util [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:22:e0,bridge_name='br-int',has_traffic_filtering=True,id=fa58a88f-dd18-4a95-98c8-21e845485f69,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfa58a88f-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.198 189020 DEBUG nova.objects.instance [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d11e815-9fde-4624-9556-a726f1b266ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.224 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] End _get_guest_xml xml=<domain type="kvm">
Feb 18 15:00:34 compute-0 nova_compute[189016]:  <uuid>5d11e815-9fde-4624-9556-a726f1b266ba</uuid>
Feb 18 15:00:34 compute-0 nova_compute[189016]:  <name>instance-00000004</name>
Feb 18 15:00:34 compute-0 nova_compute[189016]:  <memory>524288</memory>
Feb 18 15:00:34 compute-0 nova_compute[189016]:  <vcpu>1</vcpu>
Feb 18 15:00:34 compute-0 nova_compute[189016]:  <metadata>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <nova:name>vn-mz2bh6c-dqrgqnefazks-iswfzet66xcu-vnf-dzerxjls3fss</nova:name>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <nova:creationTime>2026-02-18 15:00:34</nova:creationTime>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <nova:flavor name="m1.small">
Feb 18 15:00:34 compute-0 nova_compute[189016]:        <nova:memory>512</nova:memory>
Feb 18 15:00:34 compute-0 nova_compute[189016]:        <nova:disk>1</nova:disk>
Feb 18 15:00:34 compute-0 nova_compute[189016]:        <nova:swap>0</nova:swap>
Feb 18 15:00:34 compute-0 nova_compute[189016]:        <nova:ephemeral>1</nova:ephemeral>
Feb 18 15:00:34 compute-0 nova_compute[189016]:        <nova:vcpus>1</nova:vcpus>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      </nova:flavor>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <nova:owner>
Feb 18 15:00:34 compute-0 nova_compute[189016]:        <nova:user uuid="387d978e2b494e88ad13abae2a83321d">admin</nova:user>
Feb 18 15:00:34 compute-0 nova_compute[189016]:        <nova:project uuid="71c6c5d63b07447388ace322f081ffc3">admin</nova:project>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      </nova:owner>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <nova:root type="image" uuid="7cc2a96a-1e6c-474d-b671-0e2626bf4158"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <nova:ports>
Feb 18 15:00:34 compute-0 nova_compute[189016]:        <nova:port uuid="fa58a88f-dd18-4a95-98c8-21e845485f69">
Feb 18 15:00:34 compute-0 nova_compute[189016]:          <nova:ip type="fixed" address="192.168.0.174" ipVersion="4"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:        </nova:port>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      </nova:ports>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    </nova:instance>
Feb 18 15:00:34 compute-0 nova_compute[189016]:  </metadata>
Feb 18 15:00:34 compute-0 nova_compute[189016]:  <sysinfo type="smbios">
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <system>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <entry name="manufacturer">RDO</entry>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <entry name="product">OpenStack Compute</entry>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <entry name="serial">5d11e815-9fde-4624-9556-a726f1b266ba</entry>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <entry name="uuid">5d11e815-9fde-4624-9556-a726f1b266ba</entry>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <entry name="family">Virtual Machine</entry>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    </system>
Feb 18 15:00:34 compute-0 nova_compute[189016]:  </sysinfo>
Feb 18 15:00:34 compute-0 nova_compute[189016]:  <os>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <boot dev="hd"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <smbios mode="sysinfo"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:  </os>
Feb 18 15:00:34 compute-0 nova_compute[189016]:  <features>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <acpi/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <apic/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <vmcoreinfo/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:  </features>
Feb 18 15:00:34 compute-0 nova_compute[189016]:  <clock offset="utc">
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <timer name="pit" tickpolicy="delay"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <timer name="hpet" present="no"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:  </clock>
Feb 18 15:00:34 compute-0 nova_compute[189016]:  <cpu mode="host-model" match="exact">
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <topology sockets="1" cores="1" threads="1"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:  </cpu>
Feb 18 15:00:34 compute-0 nova_compute[189016]:  <devices>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <disk type="file" device="disk">
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <driver name="qemu" type="qcow2" cache="none"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <target dev="vda" bus="virtio"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <disk type="file" device="disk">
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <driver name="qemu" type="qcow2" cache="none"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <target dev="vdb" bus="virtio"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <disk type="file" device="cdrom">
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <driver name="qemu" type="raw" cache="none"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.config"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <target dev="sda" bus="sata"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <interface type="ethernet">
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <mac address="fa:16:3e:74:22:e0"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <driver name="vhost" rx_queue_size="512"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <mtu size="1442"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <target dev="tapfa58a88f-dd"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    </interface>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <serial type="pty">
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <log file="/var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/console.log" append="off"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    </serial>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <video>
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    </video>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <input type="tablet" bus="usb"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <rng model="virtio">
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <backend model="random">/dev/urandom</backend>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    </rng>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <controller type="usb" index="0"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    <memballoon model="virtio">
Feb 18 15:00:34 compute-0 nova_compute[189016]:      <stats period="10"/>
Feb 18 15:00:34 compute-0 nova_compute[189016]:    </memballoon>
Feb 18 15:00:34 compute-0 nova_compute[189016]:  </devices>
Feb 18 15:00:34 compute-0 nova_compute[189016]: </domain>
Feb 18 15:00:34 compute-0 nova_compute[189016]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.225 189020 DEBUG nova.compute.manager [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Preparing to wait for external event network-vif-plugged-fa58a88f-dd18-4a95-98c8-21e845485f69 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.225 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.226 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.226 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.227 189020 DEBUG nova.virt.libvirt.vif [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-18T15:00:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-mz2bh6c-dqrgqnefazks-iswfzet66xcu-vnf-dzerxjls3fss',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mz2bh6c-dqrgqnefazks-iswfzet66xcu-vnf-dzerxjls3fss',id=4,image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='449d1667-0173-4809-b0e3-b50e27381afa'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71c6c5d63b07447388ace322f081ffc3',ramdisk_id='',reservation_id='r-lwioh85z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-18T15:00:27Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NjU0MzEzMjgzODMzMjgzMDY5PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU2NTQzMTMyODM4MzMyODMwNjk9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTY1NDMxMzI4MzgzMzI4MzA2OT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU2NTQzMTMyODM4MzMyODMwNjk9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NjU0MzEzMjgzODMzMjgzMDY5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NjU0MzEzMjgzODMzMjgzMDY5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Feb 18 15:00:34 compute-0 nova_compute[189016]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTY1NDMxMzI4MzgzMzI4MzA2OT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU2NTQzMTMyODM4MzMyODMwNjk9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NjU0MzEzMjgzODMzMjgzMDY5PT0tLQo=',user_id='387d978e2b494e88ad13abae2a83321d',uuid=5d11e815-9fde-4624-9556-a726f1b266ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa58a88f-dd18-4a95-98c8-21e845485f69", "address": "fa:16:3e:74:22:e0", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.174", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa58a88f-dd", "ovs_interfaceid": "fa58a88f-dd18-4a95-98c8-21e845485f69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.227 189020 DEBUG nova.network.os_vif_util [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converting VIF {"id": "fa58a88f-dd18-4a95-98c8-21e845485f69", "address": "fa:16:3e:74:22:e0", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.174", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa58a88f-dd", "ovs_interfaceid": "fa58a88f-dd18-4a95-98c8-21e845485f69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.228 189020 DEBUG nova.network.os_vif_util [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:22:e0,bridge_name='br-int',has_traffic_filtering=True,id=fa58a88f-dd18-4a95-98c8-21e845485f69,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfa58a88f-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.229 189020 DEBUG os_vif [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:22:e0,bridge_name='br-int',has_traffic_filtering=True,id=fa58a88f-dd18-4a95-98c8-21e845485f69,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfa58a88f-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.229 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.230 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.230 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.237 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.238 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa58a88f-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.238 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfa58a88f-dd, col_values=(('external_ids', {'iface-id': 'fa58a88f-dd18-4a95-98c8-21e845485f69', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:74:22:e0', 'vm-uuid': '5d11e815-9fde-4624-9556-a726f1b266ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.241 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:34 compute-0 NetworkManager[57258]: <info>  [1771426834.2434] manager: (tapfa58a88f-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.244 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.250 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.253 189020 INFO os_vif [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:22:e0,bridge_name='br-int',has_traffic_filtering=True,id=fa58a88f-dd18-4a95-98c8-21e845485f69,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfa58a88f-dd')#033[00m
Feb 18 15:00:34 compute-0 rsyslogd[239561]: message too long (8192) with configured size 8096, begin of message is: 2026-02-18 15:00:34.195 189020 DEBUG nova.virt.libvirt.vif [None req-68a84d39-60 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.317 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.317 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.317 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.318 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No VIF found with MAC fa:16:3e:74:22:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.318 189020 INFO nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Using config drive#033[00m
Feb 18 15:00:34 compute-0 rsyslogd[239561]: message too long (8192) with configured size 8096, begin of message is: 2026-02-18 15:00:34.227 189020 DEBUG nova.virt.libvirt.vif [None req-68a84d39-60 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.935 189020 INFO nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Creating config drive at /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.config#033[00m
Feb 18 15:00:34 compute-0 nova_compute[189016]: 2026-02-18 15:00:34.943 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpqia02d35 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.068 189020 DEBUG oslo_concurrency.processutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpqia02d35" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:35 compute-0 kernel: tapfa58a88f-dd: entered promiscuous mode
Feb 18 15:00:35 compute-0 NetworkManager[57258]: <info>  [1771426835.1332] manager: (tapfa58a88f-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.135 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:35 compute-0 ovn_controller[99062]: 2026-02-18T15:00:35Z|00045|binding|INFO|Claiming lport fa58a88f-dd18-4a95-98c8-21e845485f69 for this chassis.
Feb 18 15:00:35 compute-0 ovn_controller[99062]: 2026-02-18T15:00:35Z|00046|binding|INFO|fa58a88f-dd18-4a95-98c8-21e845485f69: Claiming fa:16:3e:74:22:e0 192.168.0.174
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.137 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:35 compute-0 ovn_controller[99062]: 2026-02-18T15:00:35Z|00047|binding|INFO|Setting lport fa58a88f-dd18-4a95-98c8-21e845485f69 ovn-installed in OVS
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.145 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:35 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:35.157 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:22:e0 192.168.0.174'], port_security=['fa:16:3e:74:22:e0 192.168.0.174'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-g63ccmz2bh6c-dqrgqnefazks-iswfzet66xcu-port-kgtkkgtpiadt', 'neutron:cidrs': '192.168.0.174/24', 'neutron:device_id': '5d11e815-9fde-4624-9556-a726f1b266ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-g63ccmz2bh6c-dqrgqnefazks-iswfzet66xcu-port-kgtkkgtpiadt', 'neutron:project_id': '71c6c5d63b07447388ace322f081ffc3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37e3ac68-e35f-4df2-b2af-136d5a1ee2d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af26da4e-fd70-4a49-a6e8-0a984b969598, chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], logical_port=fa58a88f-dd18-4a95-98c8-21e845485f69) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:00:35 compute-0 ovn_controller[99062]: 2026-02-18T15:00:35Z|00048|binding|INFO|Setting lport fa58a88f-dd18-4a95-98c8-21e845485f69 up in Southbound
Feb 18 15:00:35 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:35.159 108400 INFO neutron.agent.ovn.metadata.agent [-] Port fa58a88f-dd18-4a95-98c8-21e845485f69 in datapath c269c00a-f738-4cb6-ac67-09050c56f9f2 bound to our chassis#033[00m
Feb 18 15:00:35 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:35.161 108400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c269c00a-f738-4cb6-ac67-09050c56f9f2#033[00m
Feb 18 15:00:35 compute-0 systemd-udevd[245408]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 15:00:35 compute-0 systemd-machined[158361]: New machine qemu-4-instance-00000004.
Feb 18 15:00:35 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:35.180 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[b5112013-c69d-41db-89ff-76bd47c30d37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:00:35 compute-0 NetworkManager[57258]: <info>  [1771426835.1899] device (tapfa58a88f-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 18 15:00:35 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Feb 18 15:00:35 compute-0 NetworkManager[57258]: <info>  [1771426835.1971] device (tapfa58a88f-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 18 15:00:35 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:35.211 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[90d1add8-1efd-4317-9861-b2f3cd60f373]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:00:35 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:35.215 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[2b74d2f6-f04c-4023-9862-4cd8023997e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:00:35 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:35.235 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[f9534d0b-8a3b-45b4-bc67-71355daf4d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:00:35 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:35.253 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[6d7877d5-8751-4a72-ba8e-34226cf789d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc269c00a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:4d:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 574, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 574, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 346647, 'reachable_time': 36950, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245418, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:00:35 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:35.268 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[888ae558-8e79-4e43-b861-c6b801867477]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc269c00a-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 346658, 'tstamp': 346658}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245421, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc269c00a-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 346660, 'tstamp': 346660}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245421, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:00:35 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:35.270 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc269c00a-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.274 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:35 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:35.274 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc269c00a-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:00:35 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:35.274 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:00:35 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:35.275 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc269c00a-f0, col_values=(('external_ids', {'iface-id': '7e592dc1-2432-46dc-b338-f9a04aad5932'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:00:35 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:35.275 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.524 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771426835.5233896, 5d11e815-9fde-4624-9556-a726f1b266ba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.525 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] VM Started (Lifecycle Event)#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.544 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.551 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771426835.5235944, 5d11e815-9fde-4624-9556-a726f1b266ba => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.552 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] VM Paused (Lifecycle Event)#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.569 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.576 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.593 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.986 189020 DEBUG nova.compute.manager [req-ace3c357-7605-450b-9f83-5cb6531729e5 req-7382108e-9d41-428e-bb40-b58f1da9ee37 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Received event network-vif-plugged-fa58a88f-dd18-4a95-98c8-21e845485f69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.987 189020 DEBUG oslo_concurrency.lockutils [req-ace3c357-7605-450b-9f83-5cb6531729e5 req-7382108e-9d41-428e-bb40-b58f1da9ee37 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.987 189020 DEBUG oslo_concurrency.lockutils [req-ace3c357-7605-450b-9f83-5cb6531729e5 req-7382108e-9d41-428e-bb40-b58f1da9ee37 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.987 189020 DEBUG oslo_concurrency.lockutils [req-ace3c357-7605-450b-9f83-5cb6531729e5 req-7382108e-9d41-428e-bb40-b58f1da9ee37 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.987 189020 DEBUG nova.compute.manager [req-ace3c357-7605-450b-9f83-5cb6531729e5 req-7382108e-9d41-428e-bb40-b58f1da9ee37 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Processing event network-vif-plugged-fa58a88f-dd18-4a95-98c8-21e845485f69 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.988 189020 DEBUG nova.compute.manager [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.992 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771426835.9918828, 5d11e815-9fde-4624-9556-a726f1b266ba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.992 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] VM Resumed (Lifecycle Event)#033[00m
Feb 18 15:00:35 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.995 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:35.999 189020 INFO nova.virt.libvirt.driver [-] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Instance spawned successfully.#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:36.000 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:36.020 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:36.032 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:36.032 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:36.033 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:36.034 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:36.035 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:36.035 189020 DEBUG nova.virt.libvirt.driver [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:36.039 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:36.081 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:36.122 189020 INFO nova.compute.manager [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Took 8.96 seconds to spawn the instance on the hypervisor.#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:36.123 189020 DEBUG nova.compute.manager [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:36.207 189020 INFO nova.compute.manager [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Took 9.65 seconds to build instance.#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:36.227 189020 DEBUG oslo_concurrency.lockutils [None req-68a84d39-6098-45cf-925e-4d2ee35eef84 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "5d11e815-9fde-4624-9556-a726f1b266ba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:36.838 189020 DEBUG nova.network.neutron [req-3930579d-dd27-4478-bfed-1f0a6e4b38cb req-32307de7-c613-46aa-be05-09ea18e61b60 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Updated VIF entry in instance network info cache for port fa58a88f-dd18-4a95-98c8-21e845485f69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:36.838 189020 DEBUG nova.network.neutron [req-3930579d-dd27-4478-bfed-1f0a6e4b38cb req-32307de7-c613-46aa-be05-09ea18e61b60 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Updating instance_info_cache with network_info: [{"id": "fa58a88f-dd18-4a95-98c8-21e845485f69", "address": "fa:16:3e:74:22:e0", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.174", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa58a88f-dd", "ovs_interfaceid": "fa58a88f-dd18-4a95-98c8-21e845485f69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:00:36 compute-0 nova_compute[189016]: 2026-02-18 15:00:36.855 189020 DEBUG oslo_concurrency.lockutils [req-3930579d-dd27-4478-bfed-1f0a6e4b38cb req-32307de7-c613-46aa-be05-09ea18e61b60 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:00:37 compute-0 nova_compute[189016]: 2026-02-18 15:00:37.092 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:37 compute-0 podman[245430]: 2026-02-18 15:00:37.772436875 +0000 UTC m=+0.086356111 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Feb 18 15:00:37 compute-0 podman[245448]: 2026-02-18 15:00:37.885393572 +0000 UTC m=+0.084728979 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9, build-date=2024-09-18T21:23:30)
Feb 18 15:00:37 compute-0 podman[245449]: 2026-02-18 15:00:37.900859066 +0000 UTC m=+0.094128558 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 15:00:38 compute-0 nova_compute[189016]: 2026-02-18 15:00:38.063 189020 DEBUG nova.compute.manager [req-adc3a5c9-7303-4184-bf27-e2dca209d4ec req-d57fcc54-17b3-4d24-b2f3-38410fc1ea58 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Received event network-vif-plugged-fa58a88f-dd18-4a95-98c8-21e845485f69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:00:38 compute-0 nova_compute[189016]: 2026-02-18 15:00:38.064 189020 DEBUG oslo_concurrency.lockutils [req-adc3a5c9-7303-4184-bf27-e2dca209d4ec req-d57fcc54-17b3-4d24-b2f3-38410fc1ea58 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:00:38 compute-0 nova_compute[189016]: 2026-02-18 15:00:38.064 189020 DEBUG oslo_concurrency.lockutils [req-adc3a5c9-7303-4184-bf27-e2dca209d4ec req-d57fcc54-17b3-4d24-b2f3-38410fc1ea58 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:00:38 compute-0 nova_compute[189016]: 2026-02-18 15:00:38.066 189020 DEBUG oslo_concurrency.lockutils [req-adc3a5c9-7303-4184-bf27-e2dca209d4ec req-d57fcc54-17b3-4d24-b2f3-38410fc1ea58 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:00:38 compute-0 nova_compute[189016]: 2026-02-18 15:00:38.066 189020 DEBUG nova.compute.manager [req-adc3a5c9-7303-4184-bf27-e2dca209d4ec req-d57fcc54-17b3-4d24-b2f3-38410fc1ea58 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] No waiting events found dispatching network-vif-plugged-fa58a88f-dd18-4a95-98c8-21e845485f69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:00:38 compute-0 nova_compute[189016]: 2026-02-18 15:00:38.066 189020 WARNING nova.compute.manager [req-adc3a5c9-7303-4184-bf27-e2dca209d4ec req-d57fcc54-17b3-4d24-b2f3-38410fc1ea58 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Received unexpected event network-vif-plugged-fa58a88f-dd18-4a95-98c8-21e845485f69 for instance with vm_state active and task_state None.#033[00m
Feb 18 15:00:39 compute-0 nova_compute[189016]: 2026-02-18 15:00:39.243 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:41.434 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:00:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:41.435 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:00:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:00:41.436 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:00:42 compute-0 nova_compute[189016]: 2026-02-18 15:00:42.096 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:42 compute-0 nova_compute[189016]: 2026-02-18 15:00:42.120 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:00:43 compute-0 nova_compute[189016]: 2026-02-18 15:00:43.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:00:43 compute-0 nova_compute[189016]: 2026-02-18 15:00:43.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:00:43 compute-0 nova_compute[189016]: 2026-02-18 15:00:43.209 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-c469573f-54e2-4c7f-9223-77500b7b9ea2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:00:43 compute-0 nova_compute[189016]: 2026-02-18 15:00:43.210 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-c469573f-54e2-4c7f-9223-77500b7b9ea2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:00:43 compute-0 nova_compute[189016]: 2026-02-18 15:00:43.210 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 15:00:44 compute-0 nova_compute[189016]: 2026-02-18 15:00:44.247 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.098 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.383 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Updating instance_info_cache with network_info: [{"id": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "address": "fa:16:3e:8f:5f:2a", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf12fffeb-50", "ovs_interfaceid": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.404 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-c469573f-54e2-4c7f-9223-77500b7b9ea2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.405 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.406 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.406 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.407 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.407 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.408 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.408 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.409 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.433 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.435 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.435 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.435 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.573 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:47 compute-0 podman[245489]: 2026-02-18 15:00:47.57854117 +0000 UTC m=+0.086653192 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 18 15:00:47 compute-0 podman[245490]: 2026-02-18 15:00:47.59355518 +0000 UTC m=+0.095357943 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.626 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.628 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.684 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.686 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.741 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.742 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.832 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.842 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.899 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.901 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.949 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:47 compute-0 nova_compute[189016]: 2026-02-18 15:00:47.955 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.019 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.020 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.092 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.099 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.163 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.165 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.228 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.234 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.298 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.302 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.359 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.370 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.434 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.436 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.502 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.508 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.567 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.569 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 18 15:00:48 compute-0 nova_compute[189016]: 2026-02-18 15:00:48.628 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 18 15:00:49 compute-0 nova_compute[189016]: 2026-02-18 15:00:49.014 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 18 15:00:49 compute-0 nova_compute[189016]: 2026-02-18 15:00:49.016 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4682MB free_disk=72.20331573486328GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 18 15:00:49 compute-0 nova_compute[189016]: 2026-02-18 15:00:49.017 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 15:00:49 compute-0 nova_compute[189016]: 2026-02-18 15:00:49.018 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 15:00:49 compute-0 nova_compute[189016]: 2026-02-18 15:00:49.143 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 18 15:00:49 compute-0 nova_compute[189016]: 2026-02-18 15:00:49.145 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 9a9ee96c-8146-46a1-a098-5d021fb5e779 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 18 15:00:49 compute-0 nova_compute[189016]: 2026-02-18 15:00:49.145 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance c469573f-54e2-4c7f-9223-77500b7b9ea2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 18 15:00:49 compute-0 nova_compute[189016]: 2026-02-18 15:00:49.146 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 5d11e815-9fde-4624-9556-a726f1b266ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 18 15:00:49 compute-0 nova_compute[189016]: 2026-02-18 15:00:49.146 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 18 15:00:49 compute-0 nova_compute[189016]: 2026-02-18 15:00:49.146 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 18 15:00:49 compute-0 nova_compute[189016]: 2026-02-18 15:00:49.243 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 18 15:00:49 compute-0 nova_compute[189016]: 2026-02-18 15:00:49.251 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:00:49 compute-0 nova_compute[189016]: 2026-02-18 15:00:49.283 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 18 15:00:49 compute-0 nova_compute[189016]: 2026-02-18 15:00:49.348 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 18 15:00:49 compute-0 nova_compute[189016]: 2026-02-18 15:00:49.350 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 15:00:49 compute-0 podman[245580]: 2026-02-18 15:00:49.778629247 +0000 UTC m=+0.105337536 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 18 15:00:50 compute-0 nova_compute[189016]: 2026-02-18 15:00:50.995 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 18 15:00:51 compute-0 nova_compute[189016]: 2026-02-18 15:00:51.070 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 18 15:00:52 compute-0 nova_compute[189016]: 2026-02-18 15:00:52.101 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:00:54 compute-0 nova_compute[189016]: 2026-02-18 15:00:54.255 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:00:57 compute-0 nova_compute[189016]: 2026-02-18 15:00:57.104 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:00:59 compute-0 nova_compute[189016]: 2026-02-18 15:00:59.261 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:00:59 compute-0 podman[204930]: time="2026-02-18T15:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:00:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:00:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Feb 18 15:01:01 compute-0 openstack_network_exporter[208107]: ERROR   15:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:01:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:01:01 compute-0 openstack_network_exporter[208107]: ERROR   15:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:01:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:01:02 compute-0 nova_compute[189016]: 2026-02-18 15:01:02.107 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:01:02 compute-0 podman[245619]: 2026-02-18 15:01:02.767426052 +0000 UTC m=+0.077671145 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 18 15:01:02 compute-0 podman[245620]: 2026-02-18 15:01:02.781124419 +0000 UTC m=+0.090856909 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, architecture=x86_64, vcs-type=git, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., container_name=openstack_network_exporter)
Feb 18 15:01:04 compute-0 nova_compute[189016]: 2026-02-18 15:01:04.266 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:01:05 compute-0 ovn_controller[99062]: 2026-02-18T15:01:05Z|00049|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Feb 18 15:01:07 compute-0 nova_compute[189016]: 2026-02-18 15:01:07.109 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:01:08 compute-0 podman[245663]: 2026-02-18 15:01:08.748744879 +0000 UTC m=+0.075079150 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Feb 18 15:01:08 compute-0 podman[245664]: 2026-02-18 15:01:08.759560892 +0000 UTC m=+0.081963454 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, name=ubi9, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, release=1214.1726694543, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public)
Feb 18 15:01:08 compute-0 podman[245662]: 2026-02-18 15:01:08.77527433 +0000 UTC m=+0.106061024 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 15:01:09 compute-0 nova_compute[189016]: 2026-02-18 15:01:09.271 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:01:10 compute-0 ovn_controller[99062]: 2026-02-18T15:01:10Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:74:22:e0 192.168.0.174
Feb 18 15:01:10 compute-0 ovn_controller[99062]: 2026-02-18T15:01:10Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:74:22:e0 192.168.0.174
Feb 18 15:01:12 compute-0 nova_compute[189016]: 2026-02-18 15:01:12.114 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:01:14 compute-0 nova_compute[189016]: 2026-02-18 15:01:14.276 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:01:17 compute-0 nova_compute[189016]: 2026-02-18 15:01:17.119 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:01:17 compute-0 podman[245735]: 2026-02-18 15:01:17.734459915 +0000 UTC m=+0.061140847 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 18 15:01:17 compute-0 podman[245734]: 2026-02-18 15:01:17.759829207 +0000 UTC m=+0.089170887 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 18 15:01:19 compute-0 nova_compute[189016]: 2026-02-18 15:01:19.279 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:01:20 compute-0 podman[245778]: 2026-02-18 15:01:20.773173103 +0000 UTC m=+0.096801439 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 18 15:01:22 compute-0 nova_compute[189016]: 2026-02-18 15:01:22.119 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:01:24 compute-0 nova_compute[189016]: 2026-02-18 15:01:24.283 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.193 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.194 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.194 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.195 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f78deee07d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.196 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.198 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.198 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.198 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78def175f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.203 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9a9ee96c-8146-46a1-a098-5d021fb5e779', 'name': 'vn-mz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-vnf-e5gqxak74kuc', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {'metering.server_group': '449d1667-0173-4809-b0e3-b50e27381afa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.207 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c469573f-54e2-4c7f-9223-77500b7b9ea2', 'name': 'vn-mz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-vnf-jpd2avxlpe2p', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {'metering.server_group': '449d1667-0173-4809-b0e3-b50e27381afa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.210 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'debb3011-9258-4f04-9eb4-592cc56eb3eb', 'name': 'test_0', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.212 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 5d11e815-9fde-4624-9556-a726f1b266ba from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 18 15:01:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:25.214 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/5d11e815-9fde-4624-9556-a726f1b266ba -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}fc9f399571f1151dad02e1ad7f2b10f5a5ac66aa5da5d4c981c78739a1fdba51" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.071 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Wed, 18 Feb 2026 15:01:25 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-bae74371-0d78-47f4-a8e6-69d2cba23ffd x-openstack-request-id: req-bae74371-0d78-47f4-a8e6-69d2cba23ffd _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.072 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "5d11e815-9fde-4624-9556-a726f1b266ba", "name": "vn-mz2bh6c-dqrgqnefazks-iswfzet66xcu-vnf-dzerxjls3fss", "status": "ACTIVE", "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "user_id": "387d978e2b494e88ad13abae2a83321d", "metadata": {"metering.server_group": "449d1667-0173-4809-b0e3-b50e27381afa"}, "hostId": "446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd", "image": {"id": "7cc2a96a-1e6c-474d-b671-0e2626bf4158", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/7cc2a96a-1e6c-474d-b671-0e2626bf4158"}]}, "flavor": {"id": "23e98520-0527-4596-8420-5ff1feeb3155", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/23e98520-0527-4596-8420-5ff1feeb3155"}]}, "created": "2026-02-18T15:00:25Z", "updated": "2026-02-18T15:00:36Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.174", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:74:22:e0"}, {"version": 4, "addr": "192.168.122.201", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:74:22:e0"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/5d11e815-9fde-4624-9556-a726f1b266ba"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/5d11e815-9fde-4624-9556-a726f1b266ba"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-02-18T15:00:36.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, 
"OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.072 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/5d11e815-9fde-4624-9556-a726f1b266ba used request id req-bae74371-0d78-47f4-a8e6-69d2cba23ffd request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.073 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5d11e815-9fde-4624-9556-a726f1b266ba', 'name': 'vn-mz2bh6c-dqrgqnefazks-iswfzet66xcu-vnf-dzerxjls3fss', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {'metering.server_group': '449d1667-0173-4809-b0e3-b50e27381afa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.074 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.074 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.074 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.075 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-18T15:01:26.074994) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.133 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.latency volume: 817918800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.134 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.latency volume: 168311221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.134 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.latency volume: 348237395 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.195 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.latency volume: 585395980 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.196 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.latency volume: 111231743 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.196 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.latency volume: 75467101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.253 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 698777971 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.254 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 118502891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.254 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 69122624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.314 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.latency volume: 693939868 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.315 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.latency volume: 100498264 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.315 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.latency volume: 81129348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.316 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.316 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f78deee0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.316 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.317 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.317 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.317 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.317 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-18T15:01:26.317487) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.318 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.318 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.318 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.319 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.319 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.319 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.320 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.320 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.320 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.321 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.321 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.322 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.322 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f78deee0830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.322 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.322 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.323 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.323 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.323 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.323 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-18T15:01:26.323295) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.323 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.324 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.324 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.325 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.325 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.326 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.326 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.326 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.327 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.327 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.327 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.328 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.328 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f78deee2ae0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.329 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.329 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.329 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.329 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.330 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-18T15:01:26.329695) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.353 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.353 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.354 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.373 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.allocation volume: 21897216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.374 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.374 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.392 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.393 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.393 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.413 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.414 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.414 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.414 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.415 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f78deee0890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.415 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.415 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.415 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.415 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.415 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.415 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.416 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.416 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-18T15:01:26.415505) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.416 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.416 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.416 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.416 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.417 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.417 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.417 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.417 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.418 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.418 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.419 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f78deee08f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.419 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.419 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.419 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.419 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.419 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.bytes volume: 41832448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.419 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-18T15:01:26.419571) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.420 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.420 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.421 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.421 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.421 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.422 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.422 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.422 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.422 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.422 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.423 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.423 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.423 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f78deee2390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.423 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.423 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.424 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.424 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.424 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-18T15:01:26.424162) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.429 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.433 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.436 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.439 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 5d11e815-9fde-4624-9556-a726f1b266ba / tapfa58a88f-dd inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.439 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.440 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.440 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f78e1127770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.440 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.440 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.440 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.440 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.441 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-18T15:01:26.440555) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.460 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/cpu volume: 354620000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.476 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/cpu volume: 33350000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.494 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/cpu volume: 37210000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.513 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/cpu volume: 33170000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.514 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.514 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f78deee0950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.514 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.514 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.514 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.515 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.515 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.latency volume: 2190357319 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.515 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.latency volume: 22293021 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.515 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.515 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.latency volume: 2123501315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.516 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.latency volume: 9395246 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.516 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.516 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-18T15:01:26.515068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.516 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 2529045248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.516 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 11626356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.516 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.517 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.latency volume: 42314170381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.517 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.latency volume: 18712803 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.517 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.517 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.518 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f78deee21b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.518 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.518 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.518 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.518 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.518 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.packets volume: 63 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.518 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.518 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.519 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.519 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.519 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f78deee0d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.519 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-18T15:01:26.518383) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.519 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.519 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.519 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.520 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.520 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.520 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-18T15:01:26.520062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.520 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.520 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.520 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.521 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.521 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f78deee09b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.521 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.521 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.521 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.521 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.521 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.521 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.522 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-18T15:01:26.521519) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.522 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.522 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.522 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.523 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.523 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.523 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.523 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.requests volume: 218 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.524 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.524 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.524 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.524 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f78deee1010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.524 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.525 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.525 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.525 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.525 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-18T15:01:26.525164) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.525 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.525 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.526 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.526 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.526 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f78deee0a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.526 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.526 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.527 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.527 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.527 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.527 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f78deee1280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.528 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.528 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.528 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.528 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.528 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-18T15:01:26.527166) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-18T15:01:26.528281) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.528 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.528 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.529 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.529 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.529 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f78deee2240>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.529 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.529 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.529 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.529 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.529 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.bytes volume: 7370 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.530 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.530 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.530 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.bytes volume: 1906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.530 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.531 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-18T15:01:26.529810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.531 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f78deee0a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.531 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.531 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.531 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.531 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.532 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.532 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f78deee22d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.532 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-18T15:01:26.531418) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.532 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.532 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.532 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.532 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.bytes.delta volume: 2408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.533 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.bytes.delta volume: 380 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.533 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-18T15:01:26.532669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.533 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.533 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.533 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.534 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f78deee2330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.534 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.534 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.534 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.534 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.534 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.534 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-mz2bh6c-dqrgqnefazks-iswfzet66xcu-vnf-dzerxjls3fss>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-mz2bh6c-dqrgqnefazks-iswfzet66xcu-vnf-dzerxjls3fss>]
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-18T15:01:26.534333) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.534 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f78deee23c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.535 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.535 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.535 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.535 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.535 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.535 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-18T15:01:26.535279) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.535 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.536 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.536 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.536 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f78deee0cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.536 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.536 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.536 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.536 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.536 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/memory.usage volume: 48.94140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.537 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-18T15:01:26.536836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.537 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/memory.usage volume: 49.109375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.537 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/memory.usage volume: 48.765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.537 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/memory.usage volume: 49.66796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.537 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.537 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f78deee2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.538 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.538 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.538 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.538 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.538 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.538 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.538 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.539 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.539 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-18T15:01:26.538263) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.539 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.539 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.539 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.540 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.540 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.540 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.540 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.540 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.541 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.541 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f78deee0d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.541 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.541 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.542 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.542 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.542 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.542 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.542 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.542 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.543 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.543 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f78deee1520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-18T15:01:26.542113) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.543 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.543 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.543 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.544 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.544 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.544 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.544 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.544 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.545 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.545 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f78deee20f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.545 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.545 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.545 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.545 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.545 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-18T15:01:26.544016) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.545 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.545 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.546 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-18T15:01:26.545628) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.546 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.546 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.546 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.546 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f78deee2420>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.547 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.547 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.547 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.547 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.547 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.547 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-18T15:01:26.547312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.547 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-mz2bh6c-dqrgqnefazks-iswfzet66xcu-vnf-dzerxjls3fss>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-mz2bh6c-dqrgqnefazks-iswfzet66xcu-vnf-dzerxjls3fss>]
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.548 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.548 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.548 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.548 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.548 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.548 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.548 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.548 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.548 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.549 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.550 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.550 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:01:26.550 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:01:27 compute-0 nova_compute[189016]: 2026-02-18 15:01:27.123 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:01:29 compute-0 nova_compute[189016]: 2026-02-18 15:01:29.287 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:01:29 compute-0 podman[204930]: time="2026-02-18T15:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:01:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:01:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Feb 18 15:01:31 compute-0 openstack_network_exporter[208107]: ERROR   15:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:01:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:01:31 compute-0 openstack_network_exporter[208107]: ERROR   15:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:01:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:01:32 compute-0 nova_compute[189016]: 2026-02-18 15:01:32.126 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:01:33 compute-0 podman[245806]: 2026-02-18 15:01:33.752080212 +0000 UTC m=+0.063031655 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 15:01:33 compute-0 podman[245807]: 2026-02-18 15:01:33.756463433 +0000 UTC m=+0.068368200 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down 
image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, name=ubi9/ubi-minimal, release=1770267347, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, version=9.7, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Feb 18 15:01:34 compute-0 nova_compute[189016]: 2026-02-18 15:01:34.291 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:01:37 compute-0 nova_compute[189016]: 2026-02-18 15:01:37.127 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:01:39 compute-0 nova_compute[189016]: 2026-02-18 15:01:39.295 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:01:39 compute-0 podman[245853]: 2026-02-18 15:01:39.75947059 +0000 UTC m=+0.076721672 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.4, io.openshift.tags=base rhel9, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., distribution-scope=public, container_name=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Feb 18 15:01:39 compute-0 podman[245852]: 2026-02-18 15:01:39.766315813 +0000 UTC m=+0.086980631 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb 18 15:01:39 compute-0 podman[245851]: 2026-02-18 15:01:39.776464769 +0000 UTC m=+0.098743688 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 18 15:01:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:01:41.437 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:01:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:01:41.442 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:01:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:01:41.444 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:01:42 compute-0 nova_compute[189016]: 2026-02-18 15:01:42.131 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:01:43 compute-0 nova_compute[189016]: 2026-02-18 15:01:43.121 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:01:44 compute-0 nova_compute[189016]: 2026-02-18 15:01:44.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:01:44 compute-0 nova_compute[189016]: 2026-02-18 15:01:44.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:01:44 compute-0 nova_compute[189016]: 2026-02-18 15:01:44.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 15:01:44 compute-0 nova_compute[189016]: 2026-02-18 15:01:44.301 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:01:44 compute-0 nova_compute[189016]: 2026-02-18 15:01:44.866 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:01:44 compute-0 nova_compute[189016]: 2026-02-18 15:01:44.867 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:01:44 compute-0 nova_compute[189016]: 2026-02-18 15:01:44.867 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 15:01:44 compute-0 nova_compute[189016]: 2026-02-18 15:01:44.868 189020 DEBUG nova.objects.instance [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lazy-loading 'info_cache' on Instance uuid debb3011-9258-4f04-9eb4-592cc56eb3eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:01:46 compute-0 nova_compute[189016]: 2026-02-18 15:01:46.890 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updating instance_info_cache with network_info: [{"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:01:46 compute-0 nova_compute[189016]: 2026-02-18 15:01:46.904 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:01:46 compute-0 nova_compute[189016]: 2026-02-18 15:01:46.904 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 15:01:46 compute-0 nova_compute[189016]: 2026-02-18 15:01:46.905 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:01:46 compute-0 nova_compute[189016]: 2026-02-18 15:01:46.905 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:01:46 compute-0 nova_compute[189016]: 2026-02-18 15:01:46.905 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:01:46 compute-0 nova_compute[189016]: 2026-02-18 15:01:46.906 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.049 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.073 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.074 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.074 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.074 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.132 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.183 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.273 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.274 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.329 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.331 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.389 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.390 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.447 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.455 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.507 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.509 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.557 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.558 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.607 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.608 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.662 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.669 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.717 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.719 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.776 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.778 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.836 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.837 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.888 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.895 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.944 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:01:47 compute-0 nova_compute[189016]: 2026-02-18 15:01:47.945 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.006 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.008 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.057 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.058 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.106 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.437 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.439 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4563MB free_disk=72.18173217773438GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.439 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.440 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.531 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.531 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 9a9ee96c-8146-46a1-a098-5d021fb5e779 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.531 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance c469573f-54e2-4c7f-9223-77500b7b9ea2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.531 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 5d11e815-9fde-4624-9556-a726f1b266ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.532 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.532 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.623 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.642 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.644 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 15:01:48 compute-0 nova_compute[189016]: 2026-02-18 15:01:48.644 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:01:48 compute-0 podman[245952]: 2026-02-18 15:01:48.738390665 +0000 UTC m=+0.067446147 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 15:01:48 compute-0 podman[245951]: 2026-02-18 15:01:48.761838588 +0000 UTC m=+0.091585027 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 18 15:01:49 compute-0 nova_compute[189016]: 2026-02-18 15:01:49.304 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:01:49 compute-0 nova_compute[189016]: 2026-02-18 15:01:49.646 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:01:51 compute-0 nova_compute[189016]: 2026-02-18 15:01:51.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:01:51 compute-0 podman[245996]: 2026-02-18 15:01:51.773686327 +0000 UTC m=+0.091991808 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 15:01:52 compute-0 nova_compute[189016]: 2026-02-18 15:01:52.136 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:01:54 compute-0 nova_compute[189016]: 2026-02-18 15:01:54.308 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:01:57 compute-0 nova_compute[189016]: 2026-02-18 15:01:57.138 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:01:59 compute-0 nova_compute[189016]: 2026-02-18 15:01:59.311 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:01:59 compute-0 podman[204930]: time="2026-02-18T15:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:01:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:01:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Feb 18 15:02:01 compute-0 openstack_network_exporter[208107]: ERROR   15:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:02:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:02:01 compute-0 openstack_network_exporter[208107]: ERROR   15:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:02:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:02:02 compute-0 nova_compute[189016]: 2026-02-18 15:02:02.140 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:04 compute-0 nova_compute[189016]: 2026-02-18 15:02:04.316 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:04 compute-0 podman[246023]: 2026-02-18 15:02:04.771836632 +0000 UTC m=+0.088898290 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 15:02:04 compute-0 podman[246024]: 2026-02-18 15:02:04.774704844 +0000 UTC m=+0.084263152 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, name=ubi9/ubi-minimal, version=9.7, architecture=x86_64, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc.)
Feb 18 15:02:07 compute-0 nova_compute[189016]: 2026-02-18 15:02:07.141 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:09 compute-0 nova_compute[189016]: 2026-02-18 15:02:09.319 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:10 compute-0 podman[246065]: 2026-02-18 15:02:10.743735185 +0000 UTC m=+0.070144825 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb 18 15:02:10 compute-0 podman[246066]: 2026-02-18 15:02:10.752895237 +0000 UTC m=+0.069198191 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 18 15:02:10 compute-0 podman[246067]: 2026-02-18 15:02:10.787924453 +0000 UTC m=+0.108225528 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, architecture=x86_64, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_id=kepler, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 18 15:02:12 compute-0 nova_compute[189016]: 2026-02-18 15:02:12.143 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:14 compute-0 nova_compute[189016]: 2026-02-18 15:02:14.324 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:17 compute-0 nova_compute[189016]: 2026-02-18 15:02:17.145 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:19 compute-0 nova_compute[189016]: 2026-02-18 15:02:19.327 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:19 compute-0 podman[246129]: 2026-02-18 15:02:19.777674746 +0000 UTC m=+0.098973554 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 18 15:02:19 compute-0 podman[246128]: 2026-02-18 15:02:19.799732564 +0000 UTC m=+0.119795081 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Feb 18 15:02:22 compute-0 nova_compute[189016]: 2026-02-18 15:02:22.147 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:22 compute-0 podman[246170]: 2026-02-18 15:02:22.772596327 +0000 UTC m=+0.091340982 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 18 15:02:24 compute-0 nova_compute[189016]: 2026-02-18 15:02:24.331 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:27 compute-0 nova_compute[189016]: 2026-02-18 15:02:27.151 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:29 compute-0 nova_compute[189016]: 2026-02-18 15:02:29.335 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:29 compute-0 podman[204930]: time="2026-02-18T15:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:02:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:02:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Feb 18 15:02:31 compute-0 openstack_network_exporter[208107]: ERROR   15:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:02:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:02:31 compute-0 openstack_network_exporter[208107]: ERROR   15:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:02:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:02:32 compute-0 nova_compute[189016]: 2026-02-18 15:02:32.155 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:34 compute-0 nova_compute[189016]: 2026-02-18 15:02:34.339 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:35 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 18 15:02:35 compute-0 podman[246197]: 2026-02-18 15:02:35.782611591 +0000 UTC m=+0.089551775 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, release=1770267347, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z)
Feb 18 15:02:35 compute-0 podman[246196]: 2026-02-18 15:02:35.783881623 +0000 UTC m=+0.089246947 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 18 15:02:37 compute-0 nova_compute[189016]: 2026-02-18 15:02:37.157 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:39 compute-0 nova_compute[189016]: 2026-02-18 15:02:39.343 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:02:41.439 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:02:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:02:41.444 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:02:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:02:41.447 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:02:41 compute-0 podman[246241]: 2026-02-18 15:02:41.763353686 +0000 UTC m=+0.082614871 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true)
Feb 18 15:02:41 compute-0 podman[246242]: 2026-02-18 15:02:41.769003678 +0000 UTC m=+0.090296685 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 18 15:02:41 compute-0 podman[246243]: 2026-02-18 15:02:41.801611773 +0000 UTC m=+0.120939220 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9, release-0.7.12=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', 
'/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0)
Feb 18 15:02:42 compute-0 nova_compute[189016]: 2026-02-18 15:02:42.158 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:44 compute-0 nova_compute[189016]: 2026-02-18 15:02:44.347 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:45 compute-0 nova_compute[189016]: 2026-02-18 15:02:45.045 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:02:45 compute-0 nova_compute[189016]: 2026-02-18 15:02:45.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:02:46 compute-0 nova_compute[189016]: 2026-02-18 15:02:46.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:02:46 compute-0 nova_compute[189016]: 2026-02-18 15:02:46.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:02:46 compute-0 nova_compute[189016]: 2026-02-18 15:02:46.887 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:02:46 compute-0 nova_compute[189016]: 2026-02-18 15:02:46.887 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:02:46 compute-0 nova_compute[189016]: 2026-02-18 15:02:46.888 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 15:02:47 compute-0 nova_compute[189016]: 2026-02-18 15:02:47.160 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:48 compute-0 nova_compute[189016]: 2026-02-18 15:02:48.906 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Updating instance_info_cache with network_info: [{"id": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "address": "fa:16:3e:65:30:c3", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578e1a09-d9", "ovs_interfaceid": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:02:48 compute-0 nova_compute[189016]: 2026-02-18 15:02:48.947 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:02:48 compute-0 nova_compute[189016]: 2026-02-18 15:02:48.947 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 15:02:48 compute-0 nova_compute[189016]: 2026-02-18 15:02:48.948 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:02:48 compute-0 nova_compute[189016]: 2026-02-18 15:02:48.948 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:02:48 compute-0 nova_compute[189016]: 2026-02-18 15:02:48.948 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:02:48 compute-0 nova_compute[189016]: 2026-02-18 15:02:48.949 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:02:48 compute-0 nova_compute[189016]: 2026-02-18 15:02:48.949 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:02:48 compute-0 nova_compute[189016]: 2026-02-18 15:02:48.986 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:02:48 compute-0 nova_compute[189016]: 2026-02-18 15:02:48.987 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:02:48 compute-0 nova_compute[189016]: 2026-02-18 15:02:48.988 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:02:48 compute-0 nova_compute[189016]: 2026-02-18 15:02:48.988 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.159 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.217 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.218 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.291 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.292 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.343 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.344 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.356 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.396 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.404 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.463 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.464 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.522 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.522 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.570 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.571 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.622 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.630 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.683 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.684 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.742 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.743 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.795 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.796 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.885 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.891 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.941 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:02:49 compute-0 nova_compute[189016]: 2026-02-18 15:02:49.942 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.000 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.002 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.057 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.059 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.117 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.503 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.504 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4570MB free_disk=72.18173217773438GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.505 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.505 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.677 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.678 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 9a9ee96c-8146-46a1-a098-5d021fb5e779 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.678 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance c469573f-54e2-4c7f-9223-77500b7b9ea2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.678 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 5d11e815-9fde-4624-9556-a726f1b266ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.679 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.679 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.697 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing inventories for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.713 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating ProviderTree inventory for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.713 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating inventory in ProviderTree for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.726 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing aggregate associations for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 18 15:02:50 compute-0 podman[246345]: 2026-02-18 15:02:50.75182808 +0000 UTC m=+0.078450147 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0)
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.754 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing trait associations for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380, traits: HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 18 15:02:50 compute-0 podman[246346]: 2026-02-18 15:02:50.766487884 +0000 UTC m=+0.087116981 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.845 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.860 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.862 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 15:02:50 compute-0 nova_compute[189016]: 2026-02-18 15:02:50.862 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:02:51 compute-0 nova_compute[189016]: 2026-02-18 15:02:51.965 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:02:52 compute-0 nova_compute[189016]: 2026-02-18 15:02:52.027 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:02:52 compute-0 nova_compute[189016]: 2026-02-18 15:02:52.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:02:52 compute-0 nova_compute[189016]: 2026-02-18 15:02:52.164 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:53 compute-0 podman[246387]: 2026-02-18 15:02:53.797263798 +0000 UTC m=+0.124729936 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible)
Feb 18 15:02:54 compute-0 nova_compute[189016]: 2026-02-18 15:02:54.359 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:57 compute-0 nova_compute[189016]: 2026-02-18 15:02:57.165 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:59 compute-0 nova_compute[189016]: 2026-02-18 15:02:59.364 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:02:59 compute-0 podman[204930]: time="2026-02-18T15:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:02:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:02:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Feb 18 15:03:01 compute-0 openstack_network_exporter[208107]: ERROR   15:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:03:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:03:01 compute-0 openstack_network_exporter[208107]: ERROR   15:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:03:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:03:02 compute-0 nova_compute[189016]: 2026-02-18 15:03:02.167 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:04 compute-0 nova_compute[189016]: 2026-02-18 15:03:04.367 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:06 compute-0 podman[246415]: 2026-02-18 15:03:06.739191251 +0000 UTC m=+0.068616379 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 18 15:03:06 compute-0 podman[246416]: 2026-02-18 15:03:06.74311533 +0000 UTC m=+0.070520117 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7)
Feb 18 15:03:07 compute-0 nova_compute[189016]: 2026-02-18 15:03:07.170 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:09 compute-0 nova_compute[189016]: 2026-02-18 15:03:09.372 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:12 compute-0 nova_compute[189016]: 2026-02-18 15:03:12.174 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:12 compute-0 podman[246456]: 2026-02-18 15:03:12.792106276 +0000 UTC m=+0.107745315 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 18 15:03:12 compute-0 podman[246457]: 2026-02-18 15:03:12.793028455 +0000 UTC m=+0.108774226 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 18 15:03:12 compute-0 podman[246458]: 2026-02-18 15:03:12.802373122 +0000 UTC m=+0.114468230 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, maintainer=Red Hat, Inc., release-0.7.12=, vcs-type=git, version=9.4)
Feb 18 15:03:14 compute-0 nova_compute[189016]: 2026-02-18 15:03:14.375 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:17 compute-0 nova_compute[189016]: 2026-02-18 15:03:17.178 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:19 compute-0 nova_compute[189016]: 2026-02-18 15:03:19.380 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:21 compute-0 podman[246513]: 2026-02-18 15:03:21.746236432 +0000 UTC m=+0.060202960 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 18 15:03:21 compute-0 podman[246512]: 2026-02-18 15:03:21.778930178 +0000 UTC m=+0.092521449 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 18 15:03:22 compute-0 nova_compute[189016]: 2026-02-18 15:03:22.220 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:24 compute-0 nova_compute[189016]: 2026-02-18 15:03:24.385 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:24 compute-0 podman[246554]: 2026-02-18 15:03:24.812254684 +0000 UTC m=+0.128405770 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.194 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.195 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.195 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f78deee07d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.197 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.198 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.198 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.198 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.198 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.198 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.198 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.198 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.199 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.199 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.199 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.199 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.199 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.199 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.199 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.200 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.200 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.200 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.200 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.200 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.200 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.200 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.209 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9a9ee96c-8146-46a1-a098-5d021fb5e779', 'name': 'vn-mz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-vnf-e5gqxak74kuc', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {'metering.server_group': '449d1667-0173-4809-b0e3-b50e27381afa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.214 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c469573f-54e2-4c7f-9223-77500b7b9ea2', 'name': 'vn-mz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-vnf-jpd2avxlpe2p', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {'metering.server_group': '449d1667-0173-4809-b0e3-b50e27381afa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.218 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'debb3011-9258-4f04-9eb4-592cc56eb3eb', 'name': 'test_0', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.221 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5d11e815-9fde-4624-9556-a726f1b266ba', 'name': 'vn-mz2bh6c-dqrgqnefazks-iswfzet66xcu-vnf-dzerxjls3fss', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {'metering.server_group': '449d1667-0173-4809-b0e3-b50e27381afa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.222 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.222 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.222 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.222 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.225 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-18T15:03:25.222829) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.282 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.latency volume: 817918800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.283 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.latency volume: 168311221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.283 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.latency volume: 348237395 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.356 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.latency volume: 585395980 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.357 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.latency volume: 111231743 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.357 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.latency volume: 75467101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.423 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 698777971 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.423 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 118502891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.423 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 69122624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.485 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.latency volume: 693939868 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.485 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.latency volume: 100498264 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.486 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.latency volume: 81129348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.491 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.491 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f78deee0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.491 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.491 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.492 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.492 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.492 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.492 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.493 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-18T15:03:25.492133) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.493 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.493 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.494 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.495 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.495 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.495 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.495 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.496 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.496 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.496 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.497 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.497 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f78deee0830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.497 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.497 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.497 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.497 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.497 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.498 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.498 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.498 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.498 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.499 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.499 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-18T15:03:25.497677) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.499 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.500 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.500 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.500 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.500 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.501 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.501 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f78deee2ae0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.501 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.501 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.501 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.501 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-18T15:03:25.501888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.526 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.527 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.527 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.549 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.allocation volume: 21897216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.550 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.550 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.571 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.571 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.572 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.594 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.594 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.594 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.595 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.595 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f78deee0890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.596 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.596 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.596 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.596 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.596 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.596 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.597 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.597 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.597 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-18T15:03:25.596371) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.597 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.598 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.598 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.598 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.599 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.599 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.599 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.599 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.600 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.601 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f78deee08f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.601 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.601 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.601 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.601 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.601 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.bytes volume: 41832448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.602 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.602 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-18T15:03:25.601512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.602 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.603 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.603 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.603 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.603 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.604 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.604 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.604 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.604 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.605 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.605 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f78deee2390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.605 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.605 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.606 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.606 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.606 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-18T15:03:25.606107) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.610 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.614 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.617 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.620 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.621 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.621 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f78e1127770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.621 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.621 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.621 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.621 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.622 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-18T15:03:25.621604) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.640 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/cpu volume: 355880000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.659 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/cpu volume: 34690000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.679 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/cpu volume: 38480000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.702 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/cpu volume: 34450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.703 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.703 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f78deee0950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.703 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.703 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.703 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.704 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.704 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.latency volume: 2190357319 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.704 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.latency volume: 22293021 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.704 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-18T15:03:25.703839) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.704 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.705 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.latency volume: 2123501315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.705 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.latency volume: 9395246 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.705 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.705 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 2529045248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.705 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 11626356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.706 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.706 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.latency volume: 42330684447 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.706 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.latency volume: 18712803 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.706 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.707 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.707 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f78deee21b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.707 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.707 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.707 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.707 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.707 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.packets volume: 64 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.708 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-18T15:03:25.707539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.708 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.708 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.708 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.708 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.709 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f78deee0d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.709 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.709 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.709 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.709 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.710 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-18T15:03:25.709523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.710 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.710 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.710 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.710 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.711 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f78deee09b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.711 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.711 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.711 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.711 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.711 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.711 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-18T15:03:25.711377) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.711 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.712 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.712 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.712 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.712 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.713 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.713 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.713 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.713 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.713 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.713 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.714 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.714 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f78deee1010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.714 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.714 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.714 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.714 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.715 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.715 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-18T15:03:25.714712) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.715 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.715 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.715 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.716 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.716 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f78deee0a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.716 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.716 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.716 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.716 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.716 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-18T15:03:25.716389) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.717 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.717 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f78deee1280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.717 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.717 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.717 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.718 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.718 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-18T15:03:25.717835) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.718 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.718 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.718 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.719 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.719 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f78deee2240>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.719 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.719 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.719 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.719 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.719 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.bytes volume: 7440 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-18T15:03:25.719496) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.720 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.720 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.720 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.720 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.720 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f78deee0a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.721 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.721 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.721 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.721 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.721 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-18T15:03:25.721237) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.722 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.722 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f78deee22d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.722 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.722 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.722 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.722 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.722 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.723 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-18T15:03:25.722495) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.723 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.723 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.723 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.bytes.delta volume: 310 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.723 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.723 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f78deee2330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.724 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.724 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f78deee23c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.724 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.724 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.724 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.724 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.724 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-18T15:03:25.724446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.725 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.725 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.725 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.725 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.726 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f78deee0cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.726 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.726 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.726 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.726 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.726 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/memory.usage volume: 48.94140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.726 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-18T15:03:25.726313) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.726 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/memory.usage volume: 49.109375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.727 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/memory.usage volume: 48.765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.727 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.727 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.727 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f78deee2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.727 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.727 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.727 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.728 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.728 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-18T15:03:25.728011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.728 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.728 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.728 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.729 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.729 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.729 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.729 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.729 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.730 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.730 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.730 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.731 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.731 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f78deee0d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.731 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.731 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.731 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.731 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.731 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.731 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-18T15:03:25.731401) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.732 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.732 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.732 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.732 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.732 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f78deee1520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.732 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.732 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.732 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.733 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.733 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-18T15:03:25.733064) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.733 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.733 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.733 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.734 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.734 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f78deee20f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.734 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.734 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.734 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.734 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.735 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-18T15:03:25.734701) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.735 15 DEBUG ceilometer.compute.pollsters [-] 9a9ee96c-8146-46a1-a098-5d021fb5e779/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.735 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.735 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.735 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.736 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.736 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f78deee2420>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.736 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.737 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.737 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.737 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.737 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.738 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.738 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.738 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.738 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.739 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.739 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.739 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.740 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.740 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.741 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.741 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.741 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.741 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.741 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.741 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.741 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.741 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.742 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.742 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.742 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.742 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:03:25.742 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:03:27 compute-0 nova_compute[189016]: 2026-02-18 15:03:27.223 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:29 compute-0 nova_compute[189016]: 2026-02-18 15:03:29.388 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:29 compute-0 podman[204930]: time="2026-02-18T15:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:03:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:03:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Feb 18 15:03:31 compute-0 openstack_network_exporter[208107]: ERROR   15:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:03:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:03:31 compute-0 openstack_network_exporter[208107]: ERROR   15:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:03:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:03:32 compute-0 nova_compute[189016]: 2026-02-18 15:03:32.225 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:34 compute-0 nova_compute[189016]: 2026-02-18 15:03:34.393 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:37 compute-0 nova_compute[189016]: 2026-02-18 15:03:37.227 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:37 compute-0 podman[246581]: 2026-02-18 15:03:37.772456458 +0000 UTC m=+0.074642721 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, release=1770267347, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vcs-type=git, container_name=openstack_network_exporter, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Feb 18 15:03:37 compute-0 podman[246580]: 2026-02-18 15:03:37.799778846 +0000 UTC m=+0.101673553 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 15:03:39 compute-0 nova_compute[189016]: 2026-02-18 15:03:39.398 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:03:41.441 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:03:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:03:41.447 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:03:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:03:41.448 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:03:42 compute-0 nova_compute[189016]: 2026-02-18 15:03:42.229 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:43 compute-0 podman[246625]: 2026-02-18 15:03:43.737132809 +0000 UTC m=+0.064756081 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 15:03:43 compute-0 podman[246626]: 2026-02-18 15:03:43.796304337 +0000 UTC m=+0.108427738 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 18 15:03:43 compute-0 podman[246627]: 2026-02-18 15:03:43.825870211 +0000 UTC m=+0.141953631 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, config_id=kepler, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, managed_by=edpm_ansible)
Feb 18 15:03:44 compute-0 nova_compute[189016]: 2026-02-18 15:03:44.401 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:45 compute-0 nova_compute[189016]: 2026-02-18 15:03:45.044 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:03:45 compute-0 nova_compute[189016]: 2026-02-18 15:03:45.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:03:46 compute-0 nova_compute[189016]: 2026-02-18 15:03:46.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:03:46 compute-0 nova_compute[189016]: 2026-02-18 15:03:46.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:03:46 compute-0 nova_compute[189016]: 2026-02-18 15:03:46.711 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-c469573f-54e2-4c7f-9223-77500b7b9ea2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:03:46 compute-0 nova_compute[189016]: 2026-02-18 15:03:46.711 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-c469573f-54e2-4c7f-9223-77500b7b9ea2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:03:46 compute-0 nova_compute[189016]: 2026-02-18 15:03:46.712 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 15:03:47 compute-0 nova_compute[189016]: 2026-02-18 15:03:47.233 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:47 compute-0 nova_compute[189016]: 2026-02-18 15:03:47.976 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Updating instance_info_cache with network_info: [{"id": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "address": "fa:16:3e:8f:5f:2a", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf12fffeb-50", "ovs_interfaceid": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:03:48 compute-0 nova_compute[189016]: 2026-02-18 15:03:48.013 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-c469573f-54e2-4c7f-9223-77500b7b9ea2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:03:48 compute-0 nova_compute[189016]: 2026-02-18 15:03:48.015 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 15:03:48 compute-0 nova_compute[189016]: 2026-02-18 15:03:48.016 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:03:48 compute-0 nova_compute[189016]: 2026-02-18 15:03:48.016 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:03:48 compute-0 nova_compute[189016]: 2026-02-18 15:03:48.017 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.081 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.082 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.082 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.083 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.193 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.260 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.261 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.311 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.312 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.364 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.365 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.403 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.415 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.421 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.468 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.469 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.517 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.518 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.564 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.566 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.618 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.626 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.678 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.679 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.731 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.732 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.786 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.787 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.851 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.861 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.911 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.913 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.985 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:03:49 compute-0 nova_compute[189016]: 2026-02-18 15:03:49.986 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:03:50 compute-0 nova_compute[189016]: 2026-02-18 15:03:50.037 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:03:50 compute-0 nova_compute[189016]: 2026-02-18 15:03:50.038 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:03:50 compute-0 nova_compute[189016]: 2026-02-18 15:03:50.089 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:03:50 compute-0 nova_compute[189016]: 2026-02-18 15:03:50.450 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:03:50 compute-0 nova_compute[189016]: 2026-02-18 15:03:50.451 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4577MB free_disk=72.18173217773438GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 15:03:50 compute-0 nova_compute[189016]: 2026-02-18 15:03:50.451 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:03:50 compute-0 nova_compute[189016]: 2026-02-18 15:03:50.452 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:03:50 compute-0 nova_compute[189016]: 2026-02-18 15:03:50.535 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:03:50 compute-0 nova_compute[189016]: 2026-02-18 15:03:50.535 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 9a9ee96c-8146-46a1-a098-5d021fb5e779 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:03:50 compute-0 nova_compute[189016]: 2026-02-18 15:03:50.535 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance c469573f-54e2-4c7f-9223-77500b7b9ea2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:03:50 compute-0 nova_compute[189016]: 2026-02-18 15:03:50.535 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 5d11e815-9fde-4624-9556-a726f1b266ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:03:50 compute-0 nova_compute[189016]: 2026-02-18 15:03:50.536 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 15:03:50 compute-0 nova_compute[189016]: 2026-02-18 15:03:50.536 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 15:03:50 compute-0 nova_compute[189016]: 2026-02-18 15:03:50.637 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:03:50 compute-0 nova_compute[189016]: 2026-02-18 15:03:50.656 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:03:50 compute-0 nova_compute[189016]: 2026-02-18 15:03:50.658 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 15:03:50 compute-0 nova_compute[189016]: 2026-02-18 15:03:50.658 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:03:52 compute-0 nova_compute[189016]: 2026-02-18 15:03:52.234 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:52 compute-0 podman[246734]: 2026-02-18 15:03:52.729530572 +0000 UTC m=+0.058334803 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 18 15:03:52 compute-0 podman[246733]: 2026-02-18 15:03:52.774681509 +0000 UTC m=+0.103124043 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Feb 18 15:03:53 compute-0 nova_compute[189016]: 2026-02-18 15:03:53.658 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:03:54 compute-0 nova_compute[189016]: 2026-02-18 15:03:54.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:03:54 compute-0 nova_compute[189016]: 2026-02-18 15:03:54.408 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:55 compute-0 podman[246777]: 2026-02-18 15:03:55.808359732 +0000 UTC m=+0.127399389 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Feb 18 15:03:57 compute-0 nova_compute[189016]: 2026-02-18 15:03:57.236 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:59 compute-0 nova_compute[189016]: 2026-02-18 15:03:59.412 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:03:59 compute-0 podman[204930]: time="2026-02-18T15:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:03:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:03:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Feb 18 15:04:01 compute-0 openstack_network_exporter[208107]: ERROR   15:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:04:01 compute-0 openstack_network_exporter[208107]: ERROR   15:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:04:02 compute-0 nova_compute[189016]: 2026-02-18 15:04:02.239 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:04 compute-0 nova_compute[189016]: 2026-02-18 15:04:04.416 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:07 compute-0 nova_compute[189016]: 2026-02-18 15:04:07.241 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:08 compute-0 podman[246804]: 2026-02-18 15:04:08.778068551 +0000 UTC m=+0.085575159 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 15:04:08 compute-0 podman[246805]: 2026-02-18 15:04:08.784121633 +0000 UTC m=+0.091247633 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a 
stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, config_id=openstack_network_exporter, io.buildah.version=1.33.7, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter)
Feb 18 15:04:09 compute-0 nova_compute[189016]: 2026-02-18 15:04:09.419 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:12 compute-0 nova_compute[189016]: 2026-02-18 15:04:12.244 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:14 compute-0 nova_compute[189016]: 2026-02-18 15:04:14.422 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:14 compute-0 podman[246850]: 2026-02-18 15:04:14.773477365 +0000 UTC m=+0.082585472 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 18 15:04:14 compute-0 podman[246849]: 2026-02-18 15:04:14.796084184 +0000 UTC m=+0.105227211 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 18 15:04:14 compute-0 podman[246851]: 2026-02-18 15:04:14.804693311 +0000 UTC m=+0.111243473 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543)
Feb 18 15:04:15 compute-0 nova_compute[189016]: 2026-02-18 15:04:15.176 189020 DEBUG nova.compute.manager [req-a11f4fdc-028c-4a04-b39a-4ad5588692f6 req-d7690ec3-274c-4115-b435-d4df02b020c6 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Received event network-changed-578e1a09-d9b1-45b7-905b-69ab1a58cbe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:04:15 compute-0 nova_compute[189016]: 2026-02-18 15:04:15.177 189020 DEBUG nova.compute.manager [req-a11f4fdc-028c-4a04-b39a-4ad5588692f6 req-d7690ec3-274c-4115-b435-d4df02b020c6 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Refreshing instance network info cache due to event network-changed-578e1a09-d9b1-45b7-905b-69ab1a58cbe0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 18 15:04:15 compute-0 nova_compute[189016]: 2026-02-18 15:04:15.179 189020 DEBUG oslo_concurrency.lockutils [req-a11f4fdc-028c-4a04-b39a-4ad5588692f6 req-d7690ec3-274c-4115-b435-d4df02b020c6 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:04:15 compute-0 nova_compute[189016]: 2026-02-18 15:04:15.179 189020 DEBUG oslo_concurrency.lockutils [req-a11f4fdc-028c-4a04-b39a-4ad5588692f6 req-d7690ec3-274c-4115-b435-d4df02b020c6 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:04:15 compute-0 nova_compute[189016]: 2026-02-18 15:04:15.179 189020 DEBUG nova.network.neutron [req-a11f4fdc-028c-4a04-b39a-4ad5588692f6 req-d7690ec3-274c-4115-b435-d4df02b020c6 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Refreshing network info cache for port 578e1a09-d9b1-45b7-905b-69ab1a58cbe0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.060 189020 DEBUG oslo_concurrency.lockutils [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "9a9ee96c-8146-46a1-a098-5d021fb5e779" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.061 189020 DEBUG oslo_concurrency.lockutils [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.062 189020 DEBUG oslo_concurrency.lockutils [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.062 189020 DEBUG oslo_concurrency.lockutils [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.062 189020 DEBUG oslo_concurrency.lockutils [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.064 189020 INFO nova.compute.manager [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Terminating instance#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.065 189020 DEBUG nova.compute.manager [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 18 15:04:16 compute-0 kernel: tap578e1a09-d9 (unregistering): left promiscuous mode
Feb 18 15:04:16 compute-0 NetworkManager[57258]: <info>  [1771427056.1085] device (tap578e1a09-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 18 15:04:16 compute-0 ovn_controller[99062]: 2026-02-18T15:04:16Z|00050|binding|INFO|Releasing lport 578e1a09-d9b1-45b7-905b-69ab1a58cbe0 from this chassis (sb_readonly=0)
Feb 18 15:04:16 compute-0 ovn_controller[99062]: 2026-02-18T15:04:16Z|00051|binding|INFO|Setting lport 578e1a09-d9b1-45b7-905b-69ab1a58cbe0 down in Southbound
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.110 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:16 compute-0 ovn_controller[99062]: 2026-02-18T15:04:16Z|00052|binding|INFO|Removing iface tap578e1a09-d9 ovn-installed in OVS
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.114 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:16 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:16.117 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:bc:a6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:81:9f:51:00:b1'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:04:16 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:16.120 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 18 15:04:16 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:16.127 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:30:c3 192.168.0.167'], port_security=['fa:16:3e:65:30:c3 192.168.0.167'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-g63ccmz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-port-ngab67dd2bgj', 'neutron:cidrs': '192.168.0.167/24', 'neutron:device_id': '9a9ee96c-8146-46a1-a098-5d021fb5e779', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-g63ccmz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-port-ngab67dd2bgj', 'neutron:project_id': '71c6c5d63b07447388ace322f081ffc3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37e3ac68-e35f-4df2-b2af-136d5a1ee2d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af26da4e-fd70-4a49-a6e8-0a984b969598, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], logical_port=578e1a09-d9b1-45b7-905b-69ab1a58cbe0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.128 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:16 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:16.130 108400 INFO neutron.agent.ovn.metadata.agent [-] Port 578e1a09-d9b1-45b7-905b-69ab1a58cbe0 in datapath c269c00a-f738-4cb6-ac67-09050c56f9f2 unbound from our chassis#033[00m
Feb 18 15:04:16 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:16.134 108400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c269c00a-f738-4cb6-ac67-09050c56f9f2#033[00m
Feb 18 15:04:16 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Feb 18 15:04:16 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 6min 56.087s CPU time.
Feb 18 15:04:16 compute-0 systemd-machined[158361]: Machine qemu-2-instance-00000002 terminated.
Feb 18 15:04:16 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:16.164 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[da77faa9-ba4f-4610-ab8a-8ae9096798ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:04:16 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:16.204 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fad965-d075-4a65-9aba-553e0e05c05d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:04:16 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:16.209 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[0f19bc57-9d8c-4984-a7dc-42b37baf0ccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:04:16 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:16.234 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[0f8b49df-267e-41c1-9ae8-8fa5cf3ad350]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:04:16 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:16.255 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[341c1b96-e66c-40db-9f8d-1a652751b04e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc269c00a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:4d:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 346647, 'reachable_time': 30632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246919, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:04:16 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:16.279 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[e32bdff5-368f-42e1-9239-25559e4c9550]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc269c00a-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 346658, 'tstamp': 346658}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246920, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc269c00a-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 346660, 'tstamp': 346660}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246920, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:04:16 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:16.286 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc269c00a-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.289 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.294 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:16 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:16.295 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc269c00a-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:04:16 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:16.296 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:04:16 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:16.296 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc269c00a-f0, col_values=(('external_ids', {'iface-id': '7e592dc1-2432-46dc-b338-f9a04aad5932'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:04:16 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:16.297 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.368 189020 INFO nova.virt.libvirt.driver [-] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Instance destroyed successfully.#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.369 189020 DEBUG nova.objects.instance [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lazy-loading 'resources' on Instance uuid 9a9ee96c-8146-46a1-a098-5d021fb5e779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.394 189020 DEBUG nova.virt.libvirt.vif [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-18T14:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-mz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-vnf-e5gqxak74kuc',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mz2bh6c-3xrtcsvrbz4o-pvtg564w5ovi-vnf-e5gqxak74kuc',id=2,image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-18T14:53:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='449d1667-0173-4809-b0e3-b50e27381afa'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71c6c5d63b07447388ace322f081ffc3',ramdisk_id='',reservation_id='r-3rhtbt0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-18T14:53:51Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MjM2MzA4NTA1NjAxMDY3MDg3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcyMzYzMDg1MDU2MDEwNjcwODc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzIzNjMwODUwNTYwMTA2NzA4Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcyMzYzMDg1MDU2MDEwNjcwODc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MjM2MzA4NTA1NjAxMDY3MDg3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MjM2MzA4NTA1NjAxMDY3MDg3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Feb 18 15:04:16 compute-0 nova_compute[189016]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzIzN
jMwODUwNTYwMTA2NzA4Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcyMzYzMDg1MDU2MDEwNjcwODc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MjM2MzA4NTA1NjAxMDY3MDg3PT0tLQo=',user_id='387d978e2b494e88ad13abae2a83321d',uuid=9a9ee96c-8146-46a1-a098-5d021fb5e779,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "address": "fa:16:3e:65:30:c3", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578e1a09-d9", "ovs_interfaceid": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.395 189020 DEBUG nova.network.os_vif_util [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converting VIF {"id": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "address": "fa:16:3e:65:30:c3", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578e1a09-d9", "ovs_interfaceid": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.396 189020 DEBUG nova.network.os_vif_util [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:30:c3,bridge_name='br-int',has_traffic_filtering=True,id=578e1a09-d9b1-45b7-905b-69ab1a58cbe0,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap578e1a09-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.397 189020 DEBUG os_vif [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:30:c3,bridge_name='br-int',has_traffic_filtering=True,id=578e1a09-d9b1-45b7-905b-69ab1a58cbe0,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap578e1a09-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.399 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.400 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap578e1a09-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.402 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.404 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.404 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.411 189020 DEBUG nova.compute.manager [req-eaac1075-5550-434e-b043-ea18d0098eee req-5fa49d86-763a-41dd-8659-d30c764b5edd af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Received event network-vif-unplugged-578e1a09-d9b1-45b7-905b-69ab1a58cbe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.411 189020 DEBUG oslo_concurrency.lockutils [req-eaac1075-5550-434e-b043-ea18d0098eee req-5fa49d86-763a-41dd-8659-d30c764b5edd af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.412 189020 DEBUG oslo_concurrency.lockutils [req-eaac1075-5550-434e-b043-ea18d0098eee req-5fa49d86-763a-41dd-8659-d30c764b5edd af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.412 189020 DEBUG oslo_concurrency.lockutils [req-eaac1075-5550-434e-b043-ea18d0098eee req-5fa49d86-763a-41dd-8659-d30c764b5edd af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.412 189020 DEBUG nova.compute.manager [req-eaac1075-5550-434e-b043-ea18d0098eee req-5fa49d86-763a-41dd-8659-d30c764b5edd af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] No waiting events found dispatching network-vif-unplugged-578e1a09-d9b1-45b7-905b-69ab1a58cbe0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.412 189020 DEBUG nova.compute.manager [req-eaac1075-5550-434e-b043-ea18d0098eee req-5fa49d86-763a-41dd-8659-d30c764b5edd af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Received event network-vif-unplugged-578e1a09-d9b1-45b7-905b-69ab1a58cbe0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.413 189020 INFO os_vif [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:30:c3,bridge_name='br-int',has_traffic_filtering=True,id=578e1a09-d9b1-45b7-905b-69ab1a58cbe0,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap578e1a09-d9')#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.414 189020 INFO nova.virt.libvirt.driver [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Deleting instance files /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779_del#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.415 189020 INFO nova.virt.libvirt.driver [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Deletion of /var/lib/nova/instances/9a9ee96c-8146-46a1-a098-5d021fb5e779_del complete#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.527 189020 DEBUG nova.virt.libvirt.host [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.528 189020 INFO nova.virt.libvirt.host [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] UEFI support detected#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.533 189020 INFO nova.compute.manager [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Took 0.47 seconds to destroy the instance on the hypervisor.#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.534 189020 DEBUG oslo.service.loopingcall [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.535 189020 DEBUG nova.compute.manager [-] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.535 189020 DEBUG nova.network.neutron [-] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 18 15:04:16 compute-0 rsyslogd[239561]: message too long (8192) with configured size 8096, begin of message is: 2026-02-18 15:04:16.394 189020 DEBUG nova.virt.libvirt.vif [None req-21c145c2-a8 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.747 189020 DEBUG nova.network.neutron [req-a11f4fdc-028c-4a04-b39a-4ad5588692f6 req-d7690ec3-274c-4115-b435-d4df02b020c6 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Updated VIF entry in instance network info cache for port 578e1a09-d9b1-45b7-905b-69ab1a58cbe0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.747 189020 DEBUG nova.network.neutron [req-a11f4fdc-028c-4a04-b39a-4ad5588692f6 req-d7690ec3-274c-4115-b435-d4df02b020c6 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Updating instance_info_cache with network_info: [{"id": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "address": "fa:16:3e:65:30:c3", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.167", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578e1a09-d9", "ovs_interfaceid": "578e1a09-d9b1-45b7-905b-69ab1a58cbe0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:04:16 compute-0 nova_compute[189016]: 2026-02-18 15:04:16.779 189020 DEBUG oslo_concurrency.lockutils [req-a11f4fdc-028c-4a04-b39a-4ad5588692f6 req-d7690ec3-274c-4115-b435-d4df02b020c6 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-9a9ee96c-8146-46a1-a098-5d021fb5e779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:04:17 compute-0 nova_compute[189016]: 2026-02-18 15:04:17.247 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:17 compute-0 nova_compute[189016]: 2026-02-18 15:04:17.943 189020 DEBUG nova.network.neutron [-] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:04:17 compute-0 nova_compute[189016]: 2026-02-18 15:04:17.965 189020 INFO nova.compute.manager [-] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Took 1.43 seconds to deallocate network for instance.#033[00m
Feb 18 15:04:18 compute-0 nova_compute[189016]: 2026-02-18 15:04:18.006 189020 DEBUG oslo_concurrency.lockutils [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:04:18 compute-0 nova_compute[189016]: 2026-02-18 15:04:18.006 189020 DEBUG oslo_concurrency.lockutils [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:04:18 compute-0 nova_compute[189016]: 2026-02-18 15:04:18.118 189020 DEBUG nova.compute.provider_tree [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:04:18 compute-0 nova_compute[189016]: 2026-02-18 15:04:18.138 189020 DEBUG nova.scheduler.client.report [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:04:18 compute-0 nova_compute[189016]: 2026-02-18 15:04:18.176 189020 DEBUG oslo_concurrency.lockutils [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:04:18 compute-0 nova_compute[189016]: 2026-02-18 15:04:18.206 189020 INFO nova.scheduler.client.report [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Deleted allocations for instance 9a9ee96c-8146-46a1-a098-5d021fb5e779#033[00m
Feb 18 15:04:18 compute-0 nova_compute[189016]: 2026-02-18 15:04:18.287 189020 DEBUG oslo_concurrency.lockutils [None req-21c145c2-a8eb-4f09-8f02-18e74a347cda 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:04:18 compute-0 nova_compute[189016]: 2026-02-18 15:04:18.502 189020 DEBUG nova.compute.manager [req-25ca2870-74e9-49ef-9200-e505ab0aff55 req-b0aba279-f8fc-4bec-8b55-670240d2f20e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Received event network-vif-plugged-578e1a09-d9b1-45b7-905b-69ab1a58cbe0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:04:18 compute-0 nova_compute[189016]: 2026-02-18 15:04:18.502 189020 DEBUG oslo_concurrency.lockutils [req-25ca2870-74e9-49ef-9200-e505ab0aff55 req-b0aba279-f8fc-4bec-8b55-670240d2f20e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:04:18 compute-0 nova_compute[189016]: 2026-02-18 15:04:18.503 189020 DEBUG oslo_concurrency.lockutils [req-25ca2870-74e9-49ef-9200-e505ab0aff55 req-b0aba279-f8fc-4bec-8b55-670240d2f20e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:04:18 compute-0 nova_compute[189016]: 2026-02-18 15:04:18.503 189020 DEBUG oslo_concurrency.lockutils [req-25ca2870-74e9-49ef-9200-e505ab0aff55 req-b0aba279-f8fc-4bec-8b55-670240d2f20e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "9a9ee96c-8146-46a1-a098-5d021fb5e779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:04:18 compute-0 nova_compute[189016]: 2026-02-18 15:04:18.503 189020 DEBUG nova.compute.manager [req-25ca2870-74e9-49ef-9200-e505ab0aff55 req-b0aba279-f8fc-4bec-8b55-670240d2f20e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] No waiting events found dispatching network-vif-plugged-578e1a09-d9b1-45b7-905b-69ab1a58cbe0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:04:18 compute-0 nova_compute[189016]: 2026-02-18 15:04:18.503 189020 WARNING nova.compute.manager [req-25ca2870-74e9-49ef-9200-e505ab0aff55 req-b0aba279-f8fc-4bec-8b55-670240d2f20e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Received unexpected event network-vif-plugged-578e1a09-d9b1-45b7-905b-69ab1a58cbe0 for instance with vm_state deleted and task_state None.#033[00m
Feb 18 15:04:21 compute-0 nova_compute[189016]: 2026-02-18 15:04:21.403 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:22 compute-0 nova_compute[189016]: 2026-02-18 15:04:22.250 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:23 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:23.128 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bff0df27-aa33-4d98-b417-cc9248f7a486, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:04:23 compute-0 podman[246942]: 2026-02-18 15:04:23.754067367 +0000 UTC m=+0.071375349 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 18 15:04:23 compute-0 podman[246943]: 2026-02-18 15:04:23.774664106 +0000 UTC m=+0.094505532 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 18 15:04:26 compute-0 nova_compute[189016]: 2026-02-18 15:04:26.406 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:26 compute-0 podman[246987]: 2026-02-18 15:04:26.849115999 +0000 UTC m=+0.153223891 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 18 15:04:27 compute-0 nova_compute[189016]: 2026-02-18 15:04:27.253 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:29 compute-0 podman[204930]: time="2026-02-18T15:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:04:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:04:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Feb 18 15:04:31 compute-0 nova_compute[189016]: 2026-02-18 15:04:31.361 189020 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771427056.3584776, 9a9ee96c-8146-46a1-a098-5d021fb5e779 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:04:31 compute-0 nova_compute[189016]: 2026-02-18 15:04:31.363 189020 INFO nova.compute.manager [-] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] VM Stopped (Lifecycle Event)#033[00m
Feb 18 15:04:31 compute-0 nova_compute[189016]: 2026-02-18 15:04:31.393 189020 DEBUG nova.compute.manager [None req-3a85d70f-97c2-44ef-bef6-54f66e7d910a - - - - - -] [instance: 9a9ee96c-8146-46a1-a098-5d021fb5e779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:04:31 compute-0 nova_compute[189016]: 2026-02-18 15:04:31.408 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:31 compute-0 openstack_network_exporter[208107]: ERROR   15:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:04:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:04:31 compute-0 openstack_network_exporter[208107]: ERROR   15:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:04:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:04:32 compute-0 nova_compute[189016]: 2026-02-18 15:04:32.255 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:36 compute-0 nova_compute[189016]: 2026-02-18 15:04:36.411 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:37 compute-0 nova_compute[189016]: 2026-02-18 15:04:37.257 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:39 compute-0 podman[247015]: 2026-02-18 15:04:39.759478165 +0000 UTC m=+0.070894776 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 15:04:39 compute-0 podman[247016]: 2026-02-18 15:04:39.764398969 +0000 UTC m=+0.087030533 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.7, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 18 15:04:41 compute-0 nova_compute[189016]: 2026-02-18 15:04:41.415 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:41.443 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:04:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:41.445 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:04:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:04:41.447 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:04:42 compute-0 nova_compute[189016]: 2026-02-18 15:04:42.261 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:45 compute-0 podman[247062]: 2026-02-18 15:04:45.748740873 +0000 UTC m=+0.066169828 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 18 15:04:45 compute-0 podman[247063]: 2026-02-18 15:04:45.777766774 +0000 UTC m=+0.096868591 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 18 15:04:45 compute-0 podman[247064]: 2026-02-18 15:04:45.785809707 +0000 UTC m=+0.103566650 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, io.buildah.version=1.29.0, vendor=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_id=kepler, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-container, distribution-scope=public, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Feb 18 15:04:46 compute-0 nova_compute[189016]: 2026-02-18 15:04:46.045 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:04:46 compute-0 nova_compute[189016]: 2026-02-18 15:04:46.418 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:47 compute-0 nova_compute[189016]: 2026-02-18 15:04:47.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:04:47 compute-0 nova_compute[189016]: 2026-02-18 15:04:47.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:04:47 compute-0 nova_compute[189016]: 2026-02-18 15:04:47.264 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:47 compute-0 nova_compute[189016]: 2026-02-18 15:04:47.395 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:04:47 compute-0 nova_compute[189016]: 2026-02-18 15:04:47.396 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:04:47 compute-0 nova_compute[189016]: 2026-02-18 15:04:47.396 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 15:04:48 compute-0 nova_compute[189016]: 2026-02-18 15:04:48.786 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Updating instance_info_cache with network_info: [{"id": "fa58a88f-dd18-4a95-98c8-21e845485f69", "address": "fa:16:3e:74:22:e0", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.174", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa58a88f-dd", "ovs_interfaceid": "fa58a88f-dd18-4a95-98c8-21e845485f69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:04:48 compute-0 nova_compute[189016]: 2026-02-18 15:04:48.815 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:04:48 compute-0 nova_compute[189016]: 2026-02-18 15:04:48.816 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 15:04:48 compute-0 nova_compute[189016]: 2026-02-18 15:04:48.817 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:04:48 compute-0 nova_compute[189016]: 2026-02-18 15:04:48.817 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:04:48 compute-0 nova_compute[189016]: 2026-02-18 15:04:48.818 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:04:49 compute-0 nova_compute[189016]: 2026-02-18 15:04:49.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:04:50 compute-0 nova_compute[189016]: 2026-02-18 15:04:50.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:04:50 compute-0 ovn_controller[99062]: 2026-02-18T15:04:50Z|00053|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.046 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.081 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.109 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.109 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.110 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.110 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.259 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.349 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.350 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.401 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.403 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.421 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.457 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.457 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.517 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.525 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.577 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.578 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.630 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.632 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.682 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.683 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.736 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.746 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.800 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.801 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.855 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.857 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.913 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.915 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:04:51 compute-0 nova_compute[189016]: 2026-02-18 15:04:51.970 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:04:52 compute-0 nova_compute[189016]: 2026-02-18 15:04:52.265 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:52 compute-0 nova_compute[189016]: 2026-02-18 15:04:52.324 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:04:52 compute-0 nova_compute[189016]: 2026-02-18 15:04:52.325 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4711MB free_disk=72.20406723022461GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 15:04:52 compute-0 nova_compute[189016]: 2026-02-18 15:04:52.325 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:04:52 compute-0 nova_compute[189016]: 2026-02-18 15:04:52.326 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:04:52 compute-0 nova_compute[189016]: 2026-02-18 15:04:52.867 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:04:52 compute-0 nova_compute[189016]: 2026-02-18 15:04:52.868 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance c469573f-54e2-4c7f-9223-77500b7b9ea2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:04:52 compute-0 nova_compute[189016]: 2026-02-18 15:04:52.868 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 5d11e815-9fde-4624-9556-a726f1b266ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:04:52 compute-0 nova_compute[189016]: 2026-02-18 15:04:52.869 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 15:04:52 compute-0 nova_compute[189016]: 2026-02-18 15:04:52.869 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 15:04:53 compute-0 nova_compute[189016]: 2026-02-18 15:04:53.051 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:04:53 compute-0 nova_compute[189016]: 2026-02-18 15:04:53.076 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:04:53 compute-0 nova_compute[189016]: 2026-02-18 15:04:53.101 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 15:04:53 compute-0 nova_compute[189016]: 2026-02-18 15:04:53.102 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:04:54 compute-0 podman[247156]: 2026-02-18 15:04:54.747231887 +0000 UTC m=+0.065712186 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 18 15:04:54 compute-0 podman[247155]: 2026-02-18 15:04:54.776484764 +0000 UTC m=+0.094323807 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible)
Feb 18 15:04:55 compute-0 nova_compute[189016]: 2026-02-18 15:04:55.072 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:04:56 compute-0 nova_compute[189016]: 2026-02-18 15:04:56.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:04:56 compute-0 nova_compute[189016]: 2026-02-18 15:04:56.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:04:56 compute-0 nova_compute[189016]: 2026-02-18 15:04:56.423 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:57 compute-0 nova_compute[189016]: 2026-02-18 15:04:57.064 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:04:57 compute-0 nova_compute[189016]: 2026-02-18 15:04:57.065 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 18 15:04:57 compute-0 nova_compute[189016]: 2026-02-18 15:04:57.269 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:04:57 compute-0 podman[247198]: 2026-02-18 15:04:57.763635357 +0000 UTC m=+0.093236969 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 15:04:59 compute-0 podman[204930]: time="2026-02-18T15:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:04:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:04:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Feb 18 15:05:01 compute-0 nova_compute[189016]: 2026-02-18 15:05:01.066 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:05:01 compute-0 nova_compute[189016]: 2026-02-18 15:05:01.067 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 18 15:05:01 compute-0 nova_compute[189016]: 2026-02-18 15:05:01.082 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 18 15:05:01 compute-0 openstack_network_exporter[208107]: ERROR   15:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:05:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:05:01 compute-0 openstack_network_exporter[208107]: ERROR   15:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:05:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:05:01 compute-0 nova_compute[189016]: 2026-02-18 15:05:01.426 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:02 compute-0 nova_compute[189016]: 2026-02-18 15:05:02.271 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:06 compute-0 nova_compute[189016]: 2026-02-18 15:05:06.428 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:07 compute-0 nova_compute[189016]: 2026-02-18 15:05:07.273 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:10 compute-0 podman[247224]: 2026-02-18 15:05:10.785177406 +0000 UTC m=+0.107378348 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 15:05:10 compute-0 podman[247225]: 2026-02-18 15:05:10.788644493 +0000 UTC m=+0.106304771 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., config_id=openstack_network_exporter, vcs-type=git, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, managed_by=edpm_ansible, release=1770267347)
Feb 18 15:05:11 compute-0 nova_compute[189016]: 2026-02-18 15:05:11.431 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:12 compute-0 nova_compute[189016]: 2026-02-18 15:05:12.275 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:16 compute-0 nova_compute[189016]: 2026-02-18 15:05:16.433 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:16 compute-0 podman[247269]: 2026-02-18 15:05:16.744340561 +0000 UTC m=+0.064625418 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 18 15:05:16 compute-0 podman[247270]: 2026-02-18 15:05:16.745731846 +0000 UTC m=+0.063429548 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi)
Feb 18 15:05:16 compute-0 podman[247271]: 2026-02-18 15:05:16.794080515 +0000 UTC m=+0.107500030 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_id=kepler, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image 
is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, architecture=x86_64)
Feb 18 15:05:17 compute-0 nova_compute[189016]: 2026-02-18 15:05:17.278 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:17 compute-0 nova_compute[189016]: 2026-02-18 15:05:17.604 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:05:17 compute-0 nova_compute[189016]: 2026-02-18 15:05:17.771 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Triggering sync for uuid debb3011-9258-4f04-9eb4-592cc56eb3eb _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 18 15:05:17 compute-0 nova_compute[189016]: 2026-02-18 15:05:17.772 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Triggering sync for uuid c469573f-54e2-4c7f-9223-77500b7b9ea2 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 18 15:05:17 compute-0 nova_compute[189016]: 2026-02-18 15:05:17.772 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Triggering sync for uuid 5d11e815-9fde-4624-9556-a726f1b266ba _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 18 15:05:17 compute-0 nova_compute[189016]: 2026-02-18 15:05:17.772 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "debb3011-9258-4f04-9eb4-592cc56eb3eb" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:05:17 compute-0 nova_compute[189016]: 2026-02-18 15:05:17.772 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:05:17 compute-0 nova_compute[189016]: 2026-02-18 15:05:17.773 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "c469573f-54e2-4c7f-9223-77500b7b9ea2" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:05:17 compute-0 nova_compute[189016]: 2026-02-18 15:05:17.774 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:05:17 compute-0 nova_compute[189016]: 2026-02-18 15:05:17.774 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "5d11e815-9fde-4624-9556-a726f1b266ba" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:05:17 compute-0 nova_compute[189016]: 2026-02-18 15:05:17.774 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "5d11e815-9fde-4624-9556-a726f1b266ba" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:05:17 compute-0 nova_compute[189016]: 2026-02-18 15:05:17.844 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:05:17 compute-0 nova_compute[189016]: 2026-02-18 15:05:17.845 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:05:17 compute-0 nova_compute[189016]: 2026-02-18 15:05:17.882 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "5d11e815-9fde-4624-9556-a726f1b266ba" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:05:21 compute-0 nova_compute[189016]: 2026-02-18 15:05:21.436 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:22 compute-0 nova_compute[189016]: 2026-02-18 15:05:22.281 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.196 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.197 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.198 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.199 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f78deee07d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.200 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.201 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.201 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.201 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.201 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.201 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.201 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.201 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.201 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.210 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c469573f-54e2-4c7f-9223-77500b7b9ea2', 'name': 'vn-mz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-vnf-jpd2avxlpe2p', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {'metering.server_group': '449d1667-0173-4809-b0e3-b50e27381afa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.214 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'debb3011-9258-4f04-9eb4-592cc56eb3eb', 'name': 'test_0', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.218 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5d11e815-9fde-4624-9556-a726f1b266ba', 'name': 'vn-mz2bh6c-dqrgqnefazks-iswfzet66xcu-vnf-dzerxjls3fss', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {'metering.server_group': '449d1667-0173-4809-b0e3-b50e27381afa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.219 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.219 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.219 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.220 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-18T15:05:25.219811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.330 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.latency volume: 585395980 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.331 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.latency volume: 111231743 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.332 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.latency volume: 75467101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.425 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 698777971 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.426 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 118502891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.426 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 69122624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.503 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.latency volume: 693939868 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.504 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.latency volume: 100498264 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.504 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.latency volume: 81129348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.505 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.505 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f78deee0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.505 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.505 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.506 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.506 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.506 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.506 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.507 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.507 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.507 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.508 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.508 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.508 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.509 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.509 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-18T15:05:25.506173) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.510 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f78deee0830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.510 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.510 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.510 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.510 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.510 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.511 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.511 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.511 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-18T15:05:25.510549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.512 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.512 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.512 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.513 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.513 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.514 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.514 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f78deee2ae0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.514 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.514 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.514 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.514 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-18T15:05:25.514870) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.537 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.allocation volume: 21897216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.538 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.538 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.560 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.560 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.560 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.585 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.585 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.586 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.586 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.586 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f78deee0890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.586 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.586 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.587 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.587 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.587 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.587 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-18T15:05:25.587155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.588 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.588 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.588 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.588 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.589 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.589 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.589 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.590 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.590 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f78deee08f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.590 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.590 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.590 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.590 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.591 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-18T15:05:25.590725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.591 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.591 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.591 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.592 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.592 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.592 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.592 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.592 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.593 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.593 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f78deee2390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.593 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.593 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.593 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.594 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-18T15:05:25.594063) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.598 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.601 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.604 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.604 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.605 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f78e1127770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.605 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.605 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.605 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.605 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.605 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-18T15:05:25.605433) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.623 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/cpu volume: 35920000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.650 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/cpu volume: 39700000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.668 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/cpu volume: 35690000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.668 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.669 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f78deee0950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.669 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.669 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.669 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.669 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.669 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.latency volume: 2123501315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.669 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.latency volume: 9395246 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.669 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.670 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 2529045248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.670 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 11626356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.670 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.670 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.latency volume: 42330684447 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.671 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.latency volume: 18712803 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.671 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.671 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.671 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f78deee21b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.672 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.672 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.672 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-18T15:05:25.669433) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.672 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.672 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.672 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.673 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-18T15:05:25.672525) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.673 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.674 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.674 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f78deee0d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.674 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.674 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.674 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.674 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.674 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.675 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.675 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.676 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.676 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f78deee09b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.676 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-18T15:05:25.674736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.676 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.676 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.676 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.676 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.677 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.677 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-18T15:05:25.676822) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.677 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.677 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.677 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.677 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.677 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.678 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.678 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.678 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.678 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.679 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f78deee1010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.679 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.679 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.679 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.679 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.679 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.679 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.680 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.680 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.680 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f78deee0a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.680 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.680 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.680 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.680 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.681 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.681 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f78deee1280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.681 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.681 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.681 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.681 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.681 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.682 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.682 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.682 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.683 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f78deee2240>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.683 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-18T15:05:25.679405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.683 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.683 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-18T15:05:25.680895) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.683 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-18T15:05:25.681818) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.683 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.683 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.683 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.684 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.684 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f78deee0a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.684 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.684 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.684 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.684 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.685 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.685 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f78deee22d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.685 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.685 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.685 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.685 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.685 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.686 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.686 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.686 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.686 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f78deee2330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.686 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.686 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f78deee23c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.687 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.687 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.687 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.687 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.687 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.687 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.687 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-18T15:05:25.683396) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.688 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.688 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-18T15:05:25.684826) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.688 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-18T15:05:25.685841) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.688 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.688 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f78deee0cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.688 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-18T15:05:25.687274) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.688 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.688 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.688 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.688 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.688 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/memory.usage volume: 48.98828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.689 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/memory.usage volume: 48.734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.689 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.689 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.689 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f78deee2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.689 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.690 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.690 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.690 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.690 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.690 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.690 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.691 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.691 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.691 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.691 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.691 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.692 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.692 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-18T15:05:25.688644) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.692 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-18T15:05:25.690203) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.692 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.692 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f78deee0d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.693 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.693 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.693 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.693 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.693 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.693 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.693 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.694 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.694 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f78deee1520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.694 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.694 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.694 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.695 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.694 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-18T15:05:25.693254) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.695 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.695 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.695 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.695 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.696 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f78deee20f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.696 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.696 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.696 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.696 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-18T15:05:25.694928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.696 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.696 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-18T15:05:25.696427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.696 15 DEBUG ceilometer.compute.pollsters [-] c469573f-54e2-4c7f-9223-77500b7b9ea2/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.697 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.697 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.697 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.697 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f78deee2420>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.697 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.698 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.698 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.698 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.698 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.698 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.698 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.699 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.699 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.699 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.699 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.699 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.699 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.699 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.699 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.699 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.699 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.700 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.700 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.700 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.700 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.700 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.700 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.700 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.700 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.700 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:05:25.701 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:05:25 compute-0 podman[247327]: 2026-02-18 15:05:25.738433988 +0000 UTC m=+0.054218477 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 18 15:05:25 compute-0 podman[247326]: 2026-02-18 15:05:25.75566195 +0000 UTC m=+0.072302511 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2)
Feb 18 15:05:26 compute-0 nova_compute[189016]: 2026-02-18 15:05:26.438 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:27 compute-0 nova_compute[189016]: 2026-02-18 15:05:27.283 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:28 compute-0 podman[247367]: 2026-02-18 15:05:28.856947373 +0000 UTC m=+0.175846362 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 18 15:05:29 compute-0 podman[204930]: time="2026-02-18T15:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:05:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:05:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Feb 18 15:05:31 compute-0 openstack_network_exporter[208107]: ERROR   15:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:05:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:05:31 compute-0 openstack_network_exporter[208107]: ERROR   15:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:05:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:05:31 compute-0 nova_compute[189016]: 2026-02-18 15:05:31.440 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:32 compute-0 nova_compute[189016]: 2026-02-18 15:05:32.286 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:36 compute-0 nova_compute[189016]: 2026-02-18 15:05:36.443 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:37 compute-0 nova_compute[189016]: 2026-02-18 15:05:37.289 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:41 compute-0 nova_compute[189016]: 2026-02-18 15:05:41.446 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:05:41.446 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:05:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:05:41.450 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:05:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:05:41.451 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:05:41 compute-0 podman[247391]: 2026-02-18 15:05:41.773109215 +0000 UTC m=+0.081194373 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 18 15:05:41 compute-0 podman[247392]: 2026-02-18 15:05:41.775790012 +0000 UTC m=+0.084436524 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, version=9.7, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, vcs-type=git, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Feb 18 15:05:42 compute-0 nova_compute[189016]: 2026-02-18 15:05:42.293 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:46 compute-0 nova_compute[189016]: 2026-02-18 15:05:46.449 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:47 compute-0 nova_compute[189016]: 2026-02-18 15:05:47.215 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:05:47 compute-0 nova_compute[189016]: 2026-02-18 15:05:47.216 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:05:47 compute-0 nova_compute[189016]: 2026-02-18 15:05:47.295 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:47 compute-0 podman[247433]: 2026-02-18 15:05:47.738055121 +0000 UTC m=+0.064637589 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 18 15:05:47 compute-0 podman[247434]: 2026-02-18 15:05:47.767118288 +0000 UTC m=+0.093835389 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 18 15:05:47 compute-0 podman[247435]: 2026-02-18 15:05:47.772014001 +0000 UTC m=+0.092771343 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, version=9.4, com.redhat.component=ubi9-container, release-0.7.12=, vcs-type=git, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, container_name=kepler, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, name=ubi9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc.)
Feb 18 15:05:48 compute-0 nova_compute[189016]: 2026-02-18 15:05:48.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:05:48 compute-0 nova_compute[189016]: 2026-02-18 15:05:48.050 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:05:49 compute-0 nova_compute[189016]: 2026-02-18 15:05:49.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:05:49 compute-0 nova_compute[189016]: 2026-02-18 15:05:49.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:05:49 compute-0 nova_compute[189016]: 2026-02-18 15:05:49.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 15:05:49 compute-0 nova_compute[189016]: 2026-02-18 15:05:49.815 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:05:49 compute-0 nova_compute[189016]: 2026-02-18 15:05:49.815 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:05:49 compute-0 nova_compute[189016]: 2026-02-18 15:05:49.816 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 15:05:49 compute-0 nova_compute[189016]: 2026-02-18 15:05:49.816 189020 DEBUG nova.objects.instance [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lazy-loading 'info_cache' on Instance uuid debb3011-9258-4f04-9eb4-592cc56eb3eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:05:51 compute-0 nova_compute[189016]: 2026-02-18 15:05:51.271 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updating instance_info_cache with network_info: [{"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:05:51 compute-0 nova_compute[189016]: 2026-02-18 15:05:51.291 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:05:51 compute-0 nova_compute[189016]: 2026-02-18 15:05:51.292 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 15:05:51 compute-0 nova_compute[189016]: 2026-02-18 15:05:51.292 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:05:51 compute-0 nova_compute[189016]: 2026-02-18 15:05:51.293 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:05:51 compute-0 nova_compute[189016]: 2026-02-18 15:05:51.452 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.082 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.083 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.083 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.083 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.189 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.253 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.255 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.296 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.307 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.309 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.362 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.364 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.413 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.420 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.470 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.471 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.522 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.523 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.587 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.590 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.658 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.664 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.723 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.724 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.775 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.776 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.827 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.828 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:05:52 compute-0 nova_compute[189016]: 2026-02-18 15:05:52.878 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:05:53 compute-0 nova_compute[189016]: 2026-02-18 15:05:53.192 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:05:53 compute-0 nova_compute[189016]: 2026-02-18 15:05:53.194 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4710MB free_disk=72.20197296142578GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 15:05:53 compute-0 nova_compute[189016]: 2026-02-18 15:05:53.194 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:05:53 compute-0 nova_compute[189016]: 2026-02-18 15:05:53.195 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:05:53 compute-0 nova_compute[189016]: 2026-02-18 15:05:53.278 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:05:53 compute-0 nova_compute[189016]: 2026-02-18 15:05:53.278 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance c469573f-54e2-4c7f-9223-77500b7b9ea2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:05:53 compute-0 nova_compute[189016]: 2026-02-18 15:05:53.278 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 5d11e815-9fde-4624-9556-a726f1b266ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:05:53 compute-0 nova_compute[189016]: 2026-02-18 15:05:53.278 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 15:05:53 compute-0 nova_compute[189016]: 2026-02-18 15:05:53.278 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 15:05:53 compute-0 nova_compute[189016]: 2026-02-18 15:05:53.359 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:05:53 compute-0 nova_compute[189016]: 2026-02-18 15:05:53.373 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:05:53 compute-0 nova_compute[189016]: 2026-02-18 15:05:53.375 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 15:05:53 compute-0 nova_compute[189016]: 2026-02-18 15:05:53.375 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:05:56 compute-0 nova_compute[189016]: 2026-02-18 15:05:56.375 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:05:56 compute-0 nova_compute[189016]: 2026-02-18 15:05:56.454 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:56 compute-0 podman[247523]: 2026-02-18 15:05:56.735282056 +0000 UTC m=+0.060147596 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 18 15:05:56 compute-0 podman[247524]: 2026-02-18 15:05:56.757868192 +0000 UTC m=+0.073306606 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 18 15:05:57 compute-0 nova_compute[189016]: 2026-02-18 15:05:57.299 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:05:58 compute-0 nova_compute[189016]: 2026-02-18 15:05:58.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:05:59 compute-0 podman[204930]: time="2026-02-18T15:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:05:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:05:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Feb 18 15:05:59 compute-0 podman[247567]: 2026-02-18 15:05:59.769096601 +0000 UTC m=+0.093682405 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 18 15:06:01 compute-0 openstack_network_exporter[208107]: ERROR   15:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:06:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:06:01 compute-0 openstack_network_exporter[208107]: ERROR   15:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:06:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:06:01 compute-0 nova_compute[189016]: 2026-02-18 15:06:01.457 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:02 compute-0 nova_compute[189016]: 2026-02-18 15:06:02.301 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:06 compute-0 nova_compute[189016]: 2026-02-18 15:06:06.458 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:07 compute-0 nova_compute[189016]: 2026-02-18 15:06:07.303 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:11 compute-0 nova_compute[189016]: 2026-02-18 15:06:11.461 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:12 compute-0 nova_compute[189016]: 2026-02-18 15:06:12.305 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:12 compute-0 podman[247594]: 2026-02-18 15:06:12.74581102 +0000 UTC m=+0.069493750 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 18 15:06:12 compute-0 podman[247595]: 2026-02-18 15:06:12.755050661 +0000 UTC m=+0.077947191 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1770267347, vcs-type=git, managed_by=edpm_ansible, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, version=9.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 18 15:06:16 compute-0 nova_compute[189016]: 2026-02-18 15:06:16.183 189020 DEBUG nova.compute.manager [req-71502840-6ad2-4298-ab20-b80b21e865ff req-73847293-9bbf-416c-b7bf-c0bf0a40d13f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Received event network-changed-f12fffeb-5027-4ab9-8d51-b603e9bfbedd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:06:16 compute-0 nova_compute[189016]: 2026-02-18 15:06:16.185 189020 DEBUG nova.compute.manager [req-71502840-6ad2-4298-ab20-b80b21e865ff req-73847293-9bbf-416c-b7bf-c0bf0a40d13f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Refreshing instance network info cache due to event network-changed-f12fffeb-5027-4ab9-8d51-b603e9bfbedd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 18 15:06:16 compute-0 nova_compute[189016]: 2026-02-18 15:06:16.186 189020 DEBUG oslo_concurrency.lockutils [req-71502840-6ad2-4298-ab20-b80b21e865ff req-73847293-9bbf-416c-b7bf-c0bf0a40d13f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-c469573f-54e2-4c7f-9223-77500b7b9ea2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:06:16 compute-0 nova_compute[189016]: 2026-02-18 15:06:16.186 189020 DEBUG oslo_concurrency.lockutils [req-71502840-6ad2-4298-ab20-b80b21e865ff req-73847293-9bbf-416c-b7bf-c0bf0a40d13f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-c469573f-54e2-4c7f-9223-77500b7b9ea2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:06:16 compute-0 nova_compute[189016]: 2026-02-18 15:06:16.186 189020 DEBUG nova.network.neutron [req-71502840-6ad2-4298-ab20-b80b21e865ff req-73847293-9bbf-416c-b7bf-c0bf0a40d13f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Refreshing network info cache for port f12fffeb-5027-4ab9-8d51-b603e9bfbedd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 18 15:06:16 compute-0 nova_compute[189016]: 2026-02-18 15:06:16.464 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:17 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:17.029 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:bc:a6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:81:9f:51:00:b1'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.033 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:17 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:17.033 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 18 15:06:17 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:17.039 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bff0df27-aa33-4d98-b417-cc9248f7a486, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.307 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.333 189020 DEBUG oslo_concurrency.lockutils [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "c469573f-54e2-4c7f-9223-77500b7b9ea2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.334 189020 DEBUG oslo_concurrency.lockutils [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.334 189020 DEBUG oslo_concurrency.lockutils [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.334 189020 DEBUG oslo_concurrency.lockutils [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.335 189020 DEBUG oslo_concurrency.lockutils [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.337 189020 INFO nova.compute.manager [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Terminating instance#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.338 189020 DEBUG nova.compute.manager [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 18 15:06:17 compute-0 kernel: tapf12fffeb-50 (unregistering): left promiscuous mode
Feb 18 15:06:17 compute-0 NetworkManager[57258]: <info>  [1771427177.4474] device (tapf12fffeb-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.452 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:17 compute-0 ovn_controller[99062]: 2026-02-18T15:06:17Z|00054|binding|INFO|Releasing lport f12fffeb-5027-4ab9-8d51-b603e9bfbedd from this chassis (sb_readonly=0)
Feb 18 15:06:17 compute-0 ovn_controller[99062]: 2026-02-18T15:06:17Z|00055|binding|INFO|Setting lport f12fffeb-5027-4ab9-8d51-b603e9bfbedd down in Southbound
Feb 18 15:06:17 compute-0 ovn_controller[99062]: 2026-02-18T15:06:17Z|00056|binding|INFO|Removing iface tapf12fffeb-50 ovn-installed in OVS
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.460 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.466 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:17 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:17.484 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:5f:2a 192.168.0.126'], port_security=['fa:16:3e:8f:5f:2a 192.168.0.126'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-g63ccmz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-port-nheb7ynxu42a', 'neutron:cidrs': '192.168.0.126/24', 'neutron:device_id': 'c469573f-54e2-4c7f-9223-77500b7b9ea2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-g63ccmz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-port-nheb7ynxu42a', 'neutron:project_id': '71c6c5d63b07447388ace322f081ffc3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37e3ac68-e35f-4df2-b2af-136d5a1ee2d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af26da4e-fd70-4a49-a6e8-0a984b969598, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], logical_port=f12fffeb-5027-4ab9-8d51-b603e9bfbedd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:06:17 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:17.487 108400 INFO neutron.agent.ovn.metadata.agent [-] Port f12fffeb-5027-4ab9-8d51-b603e9bfbedd in datapath c269c00a-f738-4cb6-ac67-09050c56f9f2 unbound from our chassis#033[00m
Feb 18 15:06:17 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:17.489 108400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c269c00a-f738-4cb6-ac67-09050c56f9f2#033[00m
Feb 18 15:06:17 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Feb 18 15:06:17 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 22.015s CPU time.
Feb 18 15:06:17 compute-0 systemd-machined[158361]: Machine qemu-3-instance-00000003 terminated.
Feb 18 15:06:17 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:17.509 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1cba9f-8f44-4461-924e-319757ab36f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:06:17 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:17.537 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[fbcd12c7-fa69-47f0-85d5-d38f3e341bcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:06:17 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:17.541 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[f85de200-5b91-40fa-953b-bcbade09f61f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:06:17 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:17.571 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[75cce1dc-6346-4fd4-a5c3-6c6d87211ac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:06:17 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:17.590 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[77096e43-ece0-41e3-b859-e9be49447dfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc269c00a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:4d:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 346647, 'reachable_time': 30632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247664, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:06:17 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:17.602 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[2947dac6-44e4-4fca-9a25-146ef4fa3c93]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc269c00a-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 346658, 'tstamp': 346658}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247671, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc269c00a-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 346660, 'tstamp': 346660}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247671, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:06:17 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:17.606 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc269c00a-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.608 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.614 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.616 189020 INFO nova.virt.libvirt.driver [-] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Instance destroyed successfully.#033[00m
Feb 18 15:06:17 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:17.616 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc269c00a-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:06:17 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:17.616 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:06:17 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:17.617 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc269c00a-f0, col_values=(('external_ids', {'iface-id': '7e592dc1-2432-46dc-b338-f9a04aad5932'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:06:17 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:17.617 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.617 189020 DEBUG nova.objects.instance [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lazy-loading 'resources' on Instance uuid c469573f-54e2-4c7f-9223-77500b7b9ea2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.648 189020 DEBUG nova.virt.libvirt.vif [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-18T14:58:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-mz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-vnf-jpd2avxlpe2p',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mz2bh6c-jqt57gn5jekh-x5ozyrkmej5j-vnf-jpd2avxlpe2p',id=3,image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-18T14:58:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='449d1667-0173-4809-b0e3-b50e27381afa'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71c6c5d63b07447388ace322f081ffc3',ramdisk_id='',reservation_id='r-b2fajt5r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-18T14:58:35Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yOTc1MDUyODQ0ODY4NTY2MzUyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI5NzUwNTI4NDQ4Njg1NjYzNTI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mjk3NTA1Mjg0NDg2ODU2NjM1Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI5NzUwNTI4NDQ4Njg1NjYzNTI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yOTc1MDUyODQ0ODY4NTY2MzUyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yOTc1MDUyODQ0ODY4NTY2MzUyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Feb 18 15:06:17 compute-0 nova_compute[189016]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mjk3N
TA1Mjg0NDg2ODU2NjM1Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI5NzUwNTI4NDQ4Njg1NjYzNTI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yOTc1MDUyODQ0ODY4NTY2MzUyPT0tLQo=',user_id='387d978e2b494e88ad13abae2a83321d',uuid=c469573f-54e2-4c7f-9223-77500b7b9ea2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "address": "fa:16:3e:8f:5f:2a", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf12fffeb-50", "ovs_interfaceid": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.648 189020 DEBUG nova.network.os_vif_util [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converting VIF {"id": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "address": "fa:16:3e:8f:5f:2a", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf12fffeb-50", "ovs_interfaceid": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.649 189020 DEBUG nova.network.os_vif_util [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8f:5f:2a,bridge_name='br-int',has_traffic_filtering=True,id=f12fffeb-5027-4ab9-8d51-b603e9bfbedd,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf12fffeb-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.650 189020 DEBUG os_vif [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:5f:2a,bridge_name='br-int',has_traffic_filtering=True,id=f12fffeb-5027-4ab9-8d51-b603e9bfbedd,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf12fffeb-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.652 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.652 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf12fffeb-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.657 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.663 189020 INFO os_vif [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:5f:2a,bridge_name='br-int',has_traffic_filtering=True,id=f12fffeb-5027-4ab9-8d51-b603e9bfbedd,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf12fffeb-50')#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.664 189020 INFO nova.virt.libvirt.driver [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Deleting instance files /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2_del#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.665 189020 INFO nova.virt.libvirt.driver [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Deletion of /var/lib/nova/instances/c469573f-54e2-4c7f-9223-77500b7b9ea2_del complete#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.874 189020 INFO nova.compute.manager [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Took 0.54 seconds to destroy the instance on the hypervisor.#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.875 189020 DEBUG oslo.service.loopingcall [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.875 189020 DEBUG nova.compute.manager [-] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 18 15:06:17 compute-0 nova_compute[189016]: 2026-02-18 15:06:17.875 189020 DEBUG nova.network.neutron [-] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 18 15:06:17 compute-0 rsyslogd[239561]: message too long (8192) with configured size 8096, begin of message is: 2026-02-18 15:06:17.648 189020 DEBUG nova.virt.libvirt.vif [None req-7726469f-bd [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:06:18 compute-0 nova_compute[189016]: 2026-02-18 15:06:18.237 189020 DEBUG nova.network.neutron [req-71502840-6ad2-4298-ab20-b80b21e865ff req-73847293-9bbf-416c-b7bf-c0bf0a40d13f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Updated VIF entry in instance network info cache for port f12fffeb-5027-4ab9-8d51-b603e9bfbedd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 18 15:06:18 compute-0 nova_compute[189016]: 2026-02-18 15:06:18.238 189020 DEBUG nova.network.neutron [req-71502840-6ad2-4298-ab20-b80b21e865ff req-73847293-9bbf-416c-b7bf-c0bf0a40d13f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Updating instance_info_cache with network_info: [{"id": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "address": "fa:16:3e:8f:5f:2a", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.126", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf12fffeb-50", "ovs_interfaceid": "f12fffeb-5027-4ab9-8d51-b603e9bfbedd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:06:18 compute-0 nova_compute[189016]: 2026-02-18 15:06:18.298 189020 DEBUG oslo_concurrency.lockutils [req-71502840-6ad2-4298-ab20-b80b21e865ff req-73847293-9bbf-416c-b7bf-c0bf0a40d13f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-c469573f-54e2-4c7f-9223-77500b7b9ea2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:06:18 compute-0 nova_compute[189016]: 2026-02-18 15:06:18.301 189020 DEBUG nova.compute.manager [req-92dbe21f-e6c2-485d-9a2f-ea2ad3bff16e req-94cba79a-0b0e-4cff-b5c8-2b2c7c17426c af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Received event network-vif-unplugged-f12fffeb-5027-4ab9-8d51-b603e9bfbedd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:06:18 compute-0 nova_compute[189016]: 2026-02-18 15:06:18.301 189020 DEBUG oslo_concurrency.lockutils [req-92dbe21f-e6c2-485d-9a2f-ea2ad3bff16e req-94cba79a-0b0e-4cff-b5c8-2b2c7c17426c af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:06:18 compute-0 nova_compute[189016]: 2026-02-18 15:06:18.302 189020 DEBUG oslo_concurrency.lockutils [req-92dbe21f-e6c2-485d-9a2f-ea2ad3bff16e req-94cba79a-0b0e-4cff-b5c8-2b2c7c17426c af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:06:18 compute-0 nova_compute[189016]: 2026-02-18 15:06:18.303 189020 DEBUG oslo_concurrency.lockutils [req-92dbe21f-e6c2-485d-9a2f-ea2ad3bff16e req-94cba79a-0b0e-4cff-b5c8-2b2c7c17426c af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:06:18 compute-0 nova_compute[189016]: 2026-02-18 15:06:18.304 189020 DEBUG nova.compute.manager [req-92dbe21f-e6c2-485d-9a2f-ea2ad3bff16e req-94cba79a-0b0e-4cff-b5c8-2b2c7c17426c af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] No waiting events found dispatching network-vif-unplugged-f12fffeb-5027-4ab9-8d51-b603e9bfbedd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:06:18 compute-0 nova_compute[189016]: 2026-02-18 15:06:18.304 189020 DEBUG nova.compute.manager [req-92dbe21f-e6c2-485d-9a2f-ea2ad3bff16e req-94cba79a-0b0e-4cff-b5c8-2b2c7c17426c af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Received event network-vif-unplugged-f12fffeb-5027-4ab9-8d51-b603e9bfbedd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 18 15:06:18 compute-0 podman[247676]: 2026-02-18 15:06:18.733116479 +0000 UTC m=+0.059797138 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 18 15:06:18 compute-0 podman[247677]: 2026-02-18 15:06:18.769590192 +0000 UTC m=+0.084985258 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 18 15:06:18 compute-0 podman[247678]: 2026-02-18 15:06:18.78988459 +0000 UTC m=+0.096087296 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, release-0.7.12=, build-date=2024-09-18T21:23:30, version=9.4, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.buildah.version=1.29.0, architecture=x86_64, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9)
Feb 18 15:06:19 compute-0 nova_compute[189016]: 2026-02-18 15:06:19.337 189020 DEBUG nova.network.neutron [-] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:06:19 compute-0 nova_compute[189016]: 2026-02-18 15:06:19.360 189020 INFO nova.compute.manager [-] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Took 1.48 seconds to deallocate network for instance.#033[00m
Feb 18 15:06:19 compute-0 nova_compute[189016]: 2026-02-18 15:06:19.428 189020 DEBUG oslo_concurrency.lockutils [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:06:19 compute-0 nova_compute[189016]: 2026-02-18 15:06:19.429 189020 DEBUG oslo_concurrency.lockutils [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:06:19 compute-0 nova_compute[189016]: 2026-02-18 15:06:19.610 189020 DEBUG nova.compute.provider_tree [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:06:19 compute-0 nova_compute[189016]: 2026-02-18 15:06:19.633 189020 DEBUG nova.scheduler.client.report [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:06:19 compute-0 nova_compute[189016]: 2026-02-18 15:06:19.670 189020 DEBUG oslo_concurrency.lockutils [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:06:19 compute-0 nova_compute[189016]: 2026-02-18 15:06:19.724 189020 INFO nova.scheduler.client.report [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Deleted allocations for instance c469573f-54e2-4c7f-9223-77500b7b9ea2#033[00m
Feb 18 15:06:19 compute-0 nova_compute[189016]: 2026-02-18 15:06:19.798 189020 DEBUG oslo_concurrency.lockutils [None req-7726469f-bd82-473a-82b5-bb3807d8e04b 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:06:20 compute-0 nova_compute[189016]: 2026-02-18 15:06:20.383 189020 DEBUG nova.compute.manager [req-c27f84ca-a5ad-4098-beac-1f43c4f69e9c req-84a84036-b34c-4b5d-a389-a1cb8514187e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Received event network-vif-plugged-f12fffeb-5027-4ab9-8d51-b603e9bfbedd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:06:20 compute-0 nova_compute[189016]: 2026-02-18 15:06:20.384 189020 DEBUG oslo_concurrency.lockutils [req-c27f84ca-a5ad-4098-beac-1f43c4f69e9c req-84a84036-b34c-4b5d-a389-a1cb8514187e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:06:20 compute-0 nova_compute[189016]: 2026-02-18 15:06:20.384 189020 DEBUG oslo_concurrency.lockutils [req-c27f84ca-a5ad-4098-beac-1f43c4f69e9c req-84a84036-b34c-4b5d-a389-a1cb8514187e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:06:20 compute-0 nova_compute[189016]: 2026-02-18 15:06:20.385 189020 DEBUG oslo_concurrency.lockutils [req-c27f84ca-a5ad-4098-beac-1f43c4f69e9c req-84a84036-b34c-4b5d-a389-a1cb8514187e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "c469573f-54e2-4c7f-9223-77500b7b9ea2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:06:20 compute-0 nova_compute[189016]: 2026-02-18 15:06:20.385 189020 DEBUG nova.compute.manager [req-c27f84ca-a5ad-4098-beac-1f43c4f69e9c req-84a84036-b34c-4b5d-a389-a1cb8514187e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] No waiting events found dispatching network-vif-plugged-f12fffeb-5027-4ab9-8d51-b603e9bfbedd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:06:20 compute-0 nova_compute[189016]: 2026-02-18 15:06:20.385 189020 WARNING nova.compute.manager [req-c27f84ca-a5ad-4098-beac-1f43c4f69e9c req-84a84036-b34c-4b5d-a389-a1cb8514187e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Received unexpected event network-vif-plugged-f12fffeb-5027-4ab9-8d51-b603e9bfbedd for instance with vm_state deleted and task_state None.#033[00m
Feb 18 15:06:22 compute-0 nova_compute[189016]: 2026-02-18 15:06:22.310 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:22 compute-0 nova_compute[189016]: 2026-02-18 15:06:22.658 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:27 compute-0 nova_compute[189016]: 2026-02-18 15:06:27.312 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:27 compute-0 nova_compute[189016]: 2026-02-18 15:06:27.660 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:27 compute-0 podman[247737]: 2026-02-18 15:06:27.746930117 +0000 UTC m=+0.068422513 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 18 15:06:27 compute-0 podman[247738]: 2026-02-18 15:06:27.763499582 +0000 UTC m=+0.082268470 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 18 15:06:29 compute-0 podman[204930]: time="2026-02-18T15:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:06:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:06:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Feb 18 15:06:30 compute-0 podman[247778]: 2026-02-18 15:06:30.757811727 +0000 UTC m=+0.085203953 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 18 15:06:31 compute-0 openstack_network_exporter[208107]: ERROR   15:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:06:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:06:31 compute-0 openstack_network_exporter[208107]: ERROR   15:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:06:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:06:32 compute-0 nova_compute[189016]: 2026-02-18 15:06:32.315 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:32 compute-0 nova_compute[189016]: 2026-02-18 15:06:32.614 189020 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771427177.6121247, c469573f-54e2-4c7f-9223-77500b7b9ea2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:06:32 compute-0 nova_compute[189016]: 2026-02-18 15:06:32.615 189020 INFO nova.compute.manager [-] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] VM Stopped (Lifecycle Event)#033[00m
Feb 18 15:06:32 compute-0 nova_compute[189016]: 2026-02-18 15:06:32.634 189020 DEBUG nova.compute.manager [None req-d88b3b79-796f-4b58-b1f8-cbe78b0cfdf6 - - - - - -] [instance: c469573f-54e2-4c7f-9223-77500b7b9ea2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:06:32 compute-0 nova_compute[189016]: 2026-02-18 15:06:32.662 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:37 compute-0 nova_compute[189016]: 2026-02-18 15:06:37.319 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:37 compute-0 nova_compute[189016]: 2026-02-18 15:06:37.664 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:41.446 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:06:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:41.448 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:06:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:06:41.448 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:06:42 compute-0 nova_compute[189016]: 2026-02-18 15:06:42.323 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:42 compute-0 nova_compute[189016]: 2026-02-18 15:06:42.667 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:43 compute-0 podman[247807]: 2026-02-18 15:06:43.742918254 +0000 UTC m=+0.067333076 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, release=1770267347, vcs-type=git, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, vendor=Red Hat, Inc., name=ubi9/ubi-minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter)
Feb 18 15:06:43 compute-0 podman[247806]: 2026-02-18 15:06:43.745619112 +0000 UTC m=+0.072885175 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 15:06:47 compute-0 nova_compute[189016]: 2026-02-18 15:06:47.044 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:06:47 compute-0 nova_compute[189016]: 2026-02-18 15:06:47.326 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:47 compute-0 systemd-logind[831]: New session 29 of user zuul.
Feb 18 15:06:47 compute-0 systemd[1]: Started Session 29 of User zuul.
Feb 18 15:06:47 compute-0 nova_compute[189016]: 2026-02-18 15:06:47.669 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:48 compute-0 python3[248028]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 15:06:49 compute-0 nova_compute[189016]: 2026-02-18 15:06:49.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:06:49 compute-0 nova_compute[189016]: 2026-02-18 15:06:49.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:06:49 compute-0 podman[248071]: 2026-02-18 15:06:49.790557585 +0000 UTC m=+0.095144502 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, name=ubi9, release=1214.1726694543, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 18 15:06:49 compute-0 podman[248070]: 2026-02-18 15:06:49.800185986 +0000 UTC m=+0.104911276 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 18 15:06:49 compute-0 podman[248069]: 2026-02-18 15:06:49.8171113 +0000 UTC m=+0.115618675 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Feb 18 15:06:49 compute-0 nova_compute[189016]: 2026-02-18 15:06:49.940 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:06:49 compute-0 nova_compute[189016]: 2026-02-18 15:06:49.941 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:06:49 compute-0 nova_compute[189016]: 2026-02-18 15:06:49.941 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 15:06:51 compute-0 nova_compute[189016]: 2026-02-18 15:06:51.314 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Updating instance_info_cache with network_info: [{"id": "fa58a88f-dd18-4a95-98c8-21e845485f69", "address": "fa:16:3e:74:22:e0", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.174", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa58a88f-dd", "ovs_interfaceid": "fa58a88f-dd18-4a95-98c8-21e845485f69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:06:51 compute-0 nova_compute[189016]: 2026-02-18 15:06:51.333 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:06:51 compute-0 nova_compute[189016]: 2026-02-18 15:06:51.334 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 15:06:51 compute-0 nova_compute[189016]: 2026-02-18 15:06:51.335 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:06:51 compute-0 nova_compute[189016]: 2026-02-18 15:06:51.335 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:06:51 compute-0 nova_compute[189016]: 2026-02-18 15:06:51.336 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:06:51 compute-0 nova_compute[189016]: 2026-02-18 15:06:51.336 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:06:51 compute-0 nova_compute[189016]: 2026-02-18 15:06:51.336 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:06:52 compute-0 ovn_controller[99062]: 2026-02-18T15:06:52Z|00057|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Feb 18 15:06:52 compute-0 nova_compute[189016]: 2026-02-18 15:06:52.329 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:52 compute-0 nova_compute[189016]: 2026-02-18 15:06:52.671 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.291 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.291 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.292 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.292 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.429 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.526 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.527 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.579 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.580 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.643 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.645 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.708 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.718 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.778 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.779 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.851 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.853 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.933 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.935 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:06:54 compute-0 nova_compute[189016]: 2026-02-18 15:06:54.990 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:06:55 compute-0 nova_compute[189016]: 2026-02-18 15:06:55.302 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:06:55 compute-0 nova_compute[189016]: 2026-02-18 15:06:55.304 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4889MB free_disk=72.22421264648438GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 15:06:55 compute-0 nova_compute[189016]: 2026-02-18 15:06:55.304 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:06:55 compute-0 nova_compute[189016]: 2026-02-18 15:06:55.305 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:06:55 compute-0 nova_compute[189016]: 2026-02-18 15:06:55.509 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:06:55 compute-0 nova_compute[189016]: 2026-02-18 15:06:55.510 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 5d11e815-9fde-4624-9556-a726f1b266ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:06:55 compute-0 nova_compute[189016]: 2026-02-18 15:06:55.510 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 15:06:55 compute-0 nova_compute[189016]: 2026-02-18 15:06:55.511 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 15:06:55 compute-0 nova_compute[189016]: 2026-02-18 15:06:55.589 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:06:55 compute-0 nova_compute[189016]: 2026-02-18 15:06:55.608 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:06:55 compute-0 nova_compute[189016]: 2026-02-18 15:06:55.631 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 15:06:55 compute-0 nova_compute[189016]: 2026-02-18 15:06:55.631 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:06:57 compute-0 nova_compute[189016]: 2026-02-18 15:06:57.332 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:57 compute-0 nova_compute[189016]: 2026-02-18 15:06:57.626 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:06:57 compute-0 nova_compute[189016]: 2026-02-18 15:06:57.645 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:06:57 compute-0 nova_compute[189016]: 2026-02-18 15:06:57.673 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:06:58 compute-0 podman[248153]: 2026-02-18 15:06:58.77973323 +0000 UTC m=+0.093346627 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 18 15:06:58 compute-0 podman[248152]: 2026-02-18 15:06:58.799949666 +0000 UTC m=+0.106866285 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team)
Feb 18 15:06:59 compute-0 nova_compute[189016]: 2026-02-18 15:06:59.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:06:59 compute-0 podman[204930]: time="2026-02-18T15:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:06:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:06:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Feb 18 15:07:01 compute-0 openstack_network_exporter[208107]: ERROR   15:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:07:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:07:01 compute-0 openstack_network_exporter[208107]: ERROR   15:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:07:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:07:01 compute-0 podman[248195]: 2026-02-18 15:07:01.768797165 +0000 UTC m=+0.089707466 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 18 15:07:02 compute-0 nova_compute[189016]: 2026-02-18 15:07:02.335 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:02 compute-0 nova_compute[189016]: 2026-02-18 15:07:02.675 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.432 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.433 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.454 189020 DEBUG nova.compute.manager [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.535 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.536 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.543 189020 DEBUG nova.virt.hardware [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.543 189020 INFO nova.compute.claims [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.680 189020 DEBUG nova.compute.provider_tree [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.694 189020 DEBUG nova.scheduler.client.report [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.723 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.724 189020 DEBUG nova.compute.manager [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.767 189020 DEBUG nova.compute.manager [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.785 189020 INFO nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.819 189020 DEBUG nova.compute.manager [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.895 189020 DEBUG nova.compute.manager [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.897 189020 DEBUG nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.897 189020 INFO nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Creating image(s)#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.898 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "/var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.898 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.899 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.900 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "29f77961f2218a174308916f40fb40069e544f1e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:07:03 compute-0 nova_compute[189016]: 2026-02-18 15:07:03.900 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "29f77961f2218a174308916f40fb40069e544f1e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:07:04 compute-0 nova_compute[189016]: 2026-02-18 15:07:04.888 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29f77961f2218a174308916f40fb40069e544f1e.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:04 compute-0 nova_compute[189016]: 2026-02-18 15:07:04.962 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29f77961f2218a174308916f40fb40069e544f1e.part --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:04 compute-0 nova_compute[189016]: 2026-02-18 15:07:04.963 189020 DEBUG nova.virt.images [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] 51e2e472-84ec-4e87-a46f-1f4cbc90b5e2 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Feb 18 15:07:04 compute-0 nova_compute[189016]: 2026-02-18 15:07:04.967 189020 DEBUG nova.privsep.utils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Feb 18 15:07:04 compute-0 nova_compute[189016]: 2026-02-18 15:07:04.968 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/29f77961f2218a174308916f40fb40069e544f1e.part /var/lib/nova/instances/_base/29f77961f2218a174308916f40fb40069e544f1e.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.154 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/29f77961f2218a174308916f40fb40069e544f1e.part /var/lib/nova/instances/_base/29f77961f2218a174308916f40fb40069e544f1e.converted" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.156 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29f77961f2218a174308916f40fb40069e544f1e.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.204 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29f77961f2218a174308916f40fb40069e544f1e.converted --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.206 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "29f77961f2218a174308916f40fb40069e544f1e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.229 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29f77961f2218a174308916f40fb40069e544f1e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.315 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29f77961f2218a174308916f40fb40069e544f1e --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.316 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "29f77961f2218a174308916f40fb40069e544f1e" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.317 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "29f77961f2218a174308916f40fb40069e544f1e" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.329 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29f77961f2218a174308916f40fb40069e544f1e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.417 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29f77961f2218a174308916f40fb40069e544f1e --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.419 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/29f77961f2218a174308916f40fb40069e544f1e,backing_fmt=raw /var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.534 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/29f77961f2218a174308916f40fb40069e544f1e,backing_fmt=raw /var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk 1073741824" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.535 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "29f77961f2218a174308916f40fb40069e544f1e" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.536 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29f77961f2218a174308916f40fb40069e544f1e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.599 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/29f77961f2218a174308916f40fb40069e544f1e --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.600 189020 DEBUG nova.virt.disk.api [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Checking if we can resize image /var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.601 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.670 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.671 189020 DEBUG nova.virt.disk.api [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Cannot resize image /var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.672 189020 DEBUG nova.objects.instance [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lazy-loading 'migration_context' on Instance uuid 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.759 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "/var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.760 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.761 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "/var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.780 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.847 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.848 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.849 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.866 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.916 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.917 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.947 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk.eph0 1073741824" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.949 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:07:05 compute-0 nova_compute[189016]: 2026-02-18 15:07:05.950 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.024 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.025 189020 DEBUG nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.025 189020 DEBUG nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Ensure instance console log exists: /var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.026 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.026 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.027 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.029 189020 DEBUG nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-18T15:06:52Z,direct_url=<?>,disk_format='qcow2',id=51e2e472-84ec-4e87-a46f-1f4cbc90b5e2,min_disk=0,min_ram=0,name='fvt_testing_image',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-18T15:06:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'image_id': '51e2e472-84ec-4e87-a46f-1f4cbc90b5e2'}], 'ephemerals': [{'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'device_type': 'disk', 'size': 1, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.036 189020 WARNING nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.043 189020 DEBUG nova.virt.libvirt.host [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.044 189020 DEBUG nova.virt.libvirt.host [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.048 189020 DEBUG nova.virt.libvirt.host [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.049 189020 DEBUG nova.virt.libvirt.host [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.050 189020 DEBUG nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.050 189020 DEBUG nova.virt.hardware [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-18T15:06:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='d56525db-9cb7-4551-91c6-1845abb1ce10',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-02-18T15:06:52Z,direct_url=<?>,disk_format='qcow2',id=51e2e472-84ec-4e87-a46f-1f4cbc90b5e2,min_disk=0,min_ram=0,name='fvt_testing_image',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-02-18T15:06:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.051 189020 DEBUG nova.virt.hardware [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.051 189020 DEBUG nova.virt.hardware [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.051 189020 DEBUG nova.virt.hardware [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.052 189020 DEBUG nova.virt.hardware [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.052 189020 DEBUG nova.virt.hardware [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.053 189020 DEBUG nova.virt.hardware [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.053 189020 DEBUG nova.virt.hardware [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.053 189020 DEBUG nova.virt.hardware [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.054 189020 DEBUG nova.virt.hardware [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.054 189020 DEBUG nova.virt.hardware [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.058 189020 DEBUG nova.objects.instance [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.075 189020 DEBUG nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] End _get_guest_xml xml=<domain type="kvm">
Feb 18 15:07:06 compute-0 nova_compute[189016]:  <uuid>75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5</uuid>
Feb 18 15:07:06 compute-0 nova_compute[189016]:  <name>instance-00000005</name>
Feb 18 15:07:06 compute-0 nova_compute[189016]:  <memory>524288</memory>
Feb 18 15:07:06 compute-0 nova_compute[189016]:  <vcpu>1</vcpu>
Feb 18 15:07:06 compute-0 nova_compute[189016]:  <metadata>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <nova:name>fvt_testing_server</nova:name>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <nova:creationTime>2026-02-18 15:07:06</nova:creationTime>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <nova:flavor name="fvt_testing_flavor">
Feb 18 15:07:06 compute-0 nova_compute[189016]:        <nova:memory>512</nova:memory>
Feb 18 15:07:06 compute-0 nova_compute[189016]:        <nova:disk>1</nova:disk>
Feb 18 15:07:06 compute-0 nova_compute[189016]:        <nova:swap>0</nova:swap>
Feb 18 15:07:06 compute-0 nova_compute[189016]:        <nova:ephemeral>1</nova:ephemeral>
Feb 18 15:07:06 compute-0 nova_compute[189016]:        <nova:vcpus>1</nova:vcpus>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      </nova:flavor>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <nova:owner>
Feb 18 15:07:06 compute-0 nova_compute[189016]:        <nova:user uuid="387d978e2b494e88ad13abae2a83321d">admin</nova:user>
Feb 18 15:07:06 compute-0 nova_compute[189016]:        <nova:project uuid="71c6c5d63b07447388ace322f081ffc3">admin</nova:project>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      </nova:owner>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <nova:root type="image" uuid="51e2e472-84ec-4e87-a46f-1f4cbc90b5e2"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <nova:ports/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    </nova:instance>
Feb 18 15:07:06 compute-0 nova_compute[189016]:  </metadata>
Feb 18 15:07:06 compute-0 nova_compute[189016]:  <sysinfo type="smbios">
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <system>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <entry name="manufacturer">RDO</entry>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <entry name="product">OpenStack Compute</entry>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <entry name="serial">75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5</entry>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <entry name="uuid">75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5</entry>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <entry name="family">Virtual Machine</entry>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    </system>
Feb 18 15:07:06 compute-0 nova_compute[189016]:  </sysinfo>
Feb 18 15:07:06 compute-0 nova_compute[189016]:  <os>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <boot dev="hd"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <smbios mode="sysinfo"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:  </os>
Feb 18 15:07:06 compute-0 nova_compute[189016]:  <features>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <acpi/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <apic/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <vmcoreinfo/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:  </features>
Feb 18 15:07:06 compute-0 nova_compute[189016]:  <clock offset="utc">
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <timer name="pit" tickpolicy="delay"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <timer name="hpet" present="no"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:  </clock>
Feb 18 15:07:06 compute-0 nova_compute[189016]:  <cpu mode="host-model" match="exact">
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <topology sockets="1" cores="1" threads="1"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:  </cpu>
Feb 18 15:07:06 compute-0 nova_compute[189016]:  <devices>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <disk type="file" device="disk">
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <driver name="qemu" type="qcow2" cache="none"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <target dev="vda" bus="virtio"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <disk type="file" device="disk">
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <driver name="qemu" type="qcow2" cache="none"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk.eph0"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <target dev="vdb" bus="virtio"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <disk type="file" device="cdrom">
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <driver name="qemu" type="raw" cache="none"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk.config"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <target dev="sda" bus="sata"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <serial type="pty">
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <log file="/var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/console.log" append="off"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    </serial>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <video>
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    </video>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <input type="tablet" bus="usb"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <rng model="virtio">
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <backend model="random">/dev/urandom</backend>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    </rng>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <controller type="usb" index="0"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    <memballoon model="virtio">
Feb 18 15:07:06 compute-0 nova_compute[189016]:      <stats period="10"/>
Feb 18 15:07:06 compute-0 nova_compute[189016]:    </memballoon>
Feb 18 15:07:06 compute-0 nova_compute[189016]:  </devices>
Feb 18 15:07:06 compute-0 nova_compute[189016]: </domain>
Feb 18 15:07:06 compute-0 nova_compute[189016]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.126 189020 DEBUG nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.127 189020 DEBUG nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.128 189020 DEBUG nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.129 189020 INFO nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Using config drive#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.242 189020 INFO nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Creating config drive at /var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk.config#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.247 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjk9amyi9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.369 189020 DEBUG oslo_concurrency.processutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjk9amyi9" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:06 compute-0 systemd-machined[158361]: New machine qemu-5-instance-00000005.
Feb 18 15:07:06 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.772 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427226.771597, 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.772 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] VM Resumed (Lifecycle Event)#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.776 189020 DEBUG nova.compute.manager [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.776 189020 DEBUG nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.781 189020 INFO nova.virt.libvirt.driver [-] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Instance spawned successfully.#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.781 189020 DEBUG nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.798 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.806 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.810 189020 DEBUG nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.810 189020 DEBUG nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.811 189020 DEBUG nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.811 189020 DEBUG nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.812 189020 DEBUG nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.812 189020 DEBUG nova.virt.libvirt.driver [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.836 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.836 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427226.7760644, 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.837 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] VM Started (Lifecycle Event)#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.860 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.865 189020 INFO nova.compute.manager [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Took 2.97 seconds to spawn the instance on the hypervisor.#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.865 189020 DEBUG nova.compute.manager [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.867 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.907 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.949 189020 INFO nova.compute.manager [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Took 3.44 seconds to build instance.#033[00m
Feb 18 15:07:06 compute-0 nova_compute[189016]: 2026-02-18 15:07:06.972 189020 DEBUG oslo_concurrency.lockutils [None req-113877da-8de1-4b41-9ec0-f11951dc8fa7 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 3.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:07:07 compute-0 nova_compute[189016]: 2026-02-18 15:07:07.336 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:07 compute-0 nova_compute[189016]: 2026-02-18 15:07:07.677 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:08 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 18 15:07:08 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 18 15:07:12 compute-0 nova_compute[189016]: 2026-02-18 15:07:12.339 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:12 compute-0 nova_compute[189016]: 2026-02-18 15:07:12.680 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:14 compute-0 podman[248311]: 2026-02-18 15:07:14.756993986 +0000 UTC m=+0.070975986 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 15:07:14 compute-0 podman[248312]: 2026-02-18 15:07:14.774736597 +0000 UTC m=+0.086129662 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter)
Feb 18 15:07:17 compute-0 nova_compute[189016]: 2026-02-18 15:07:17.341 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:17 compute-0 nova_compute[189016]: 2026-02-18 15:07:17.682 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:20 compute-0 podman[248358]: 2026-02-18 15:07:20.753183836 +0000 UTC m=+0.071252593 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 18 15:07:20 compute-0 podman[248359]: 2026-02-18 15:07:20.772850405 +0000 UTC m=+0.092890611 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_id=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.component=ubi9-container, container_name=kepler)
Feb 18 15:07:20 compute-0 podman[248357]: 2026-02-18 15:07:20.785647553 +0000 UTC m=+0.093765112 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 15:07:21 compute-0 nova_compute[189016]: 2026-02-18 15:07:21.034 189020 DEBUG oslo_concurrency.lockutils [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 15:07:21 compute-0 nova_compute[189016]: 2026-02-18 15:07:21.035 189020 DEBUG oslo_concurrency.lockutils [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 15:07:21 compute-0 nova_compute[189016]: 2026-02-18 15:07:21.036 189020 DEBUG oslo_concurrency.lockutils [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 15:07:21 compute-0 nova_compute[189016]: 2026-02-18 15:07:21.036 189020 DEBUG oslo_concurrency.lockutils [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 15:07:21 compute-0 nova_compute[189016]: 2026-02-18 15:07:21.036 189020 DEBUG oslo_concurrency.lockutils [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 15:07:21 compute-0 nova_compute[189016]: 2026-02-18 15:07:21.041 189020 INFO nova.compute.manager [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Terminating instance
Feb 18 15:07:21 compute-0 nova_compute[189016]: 2026-02-18 15:07:21.043 189020 DEBUG oslo_concurrency.lockutils [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "refresh_cache-75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 18 15:07:21 compute-0 nova_compute[189016]: 2026-02-18 15:07:21.043 189020 DEBUG oslo_concurrency.lockutils [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquired lock "refresh_cache-75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 18 15:07:21 compute-0 nova_compute[189016]: 2026-02-18 15:07:21.043 189020 DEBUG nova.network.neutron [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 18 15:07:21 compute-0 nova_compute[189016]: 2026-02-18 15:07:21.867 189020 DEBUG nova.network.neutron [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 18 15:07:22 compute-0 nova_compute[189016]: 2026-02-18 15:07:22.345 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:07:22 compute-0 nova_compute[189016]: 2026-02-18 15:07:22.685 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:07:22 compute-0 nova_compute[189016]: 2026-02-18 15:07:22.916 189020 DEBUG nova.network.neutron [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 18 15:07:22 compute-0 nova_compute[189016]: 2026-02-18 15:07:22.931 189020 DEBUG oslo_concurrency.lockutils [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Releasing lock "refresh_cache-75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 18 15:07:22 compute-0 nova_compute[189016]: 2026-02-18 15:07:22.932 189020 DEBUG nova.compute.manager [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 18 15:07:22 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Feb 18 15:07:22 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 16.617s CPU time.
Feb 18 15:07:22 compute-0 systemd-machined[158361]: Machine qemu-5-instance-00000005 terminated.
Feb 18 15:07:23 compute-0 nova_compute[189016]: 2026-02-18 15:07:23.180 189020 INFO nova.virt.libvirt.driver [-] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Instance destroyed successfully.
Feb 18 15:07:23 compute-0 nova_compute[189016]: 2026-02-18 15:07:23.182 189020 DEBUG nova.objects.instance [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lazy-loading 'resources' on Instance uuid 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 18 15:07:23 compute-0 nova_compute[189016]: 2026-02-18 15:07:23.195 189020 INFO nova.virt.libvirt.driver [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Deleting instance files /var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5_del
Feb 18 15:07:23 compute-0 nova_compute[189016]: 2026-02-18 15:07:23.196 189020 INFO nova.virt.libvirt.driver [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Deletion of /var/lib/nova/instances/75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5_del complete
Feb 18 15:07:23 compute-0 nova_compute[189016]: 2026-02-18 15:07:23.246 189020 INFO nova.compute.manager [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Took 0.31 seconds to destroy the instance on the hypervisor.
Feb 18 15:07:23 compute-0 nova_compute[189016]: 2026-02-18 15:07:23.248 189020 DEBUG oslo.service.loopingcall [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 18 15:07:23 compute-0 nova_compute[189016]: 2026-02-18 15:07:23.249 189020 DEBUG nova.compute.manager [-] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 18 15:07:23 compute-0 nova_compute[189016]: 2026-02-18 15:07:23.249 189020 DEBUG nova.network.neutron [-] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 18 15:07:23 compute-0 nova_compute[189016]: 2026-02-18 15:07:23.849 189020 DEBUG nova.network.neutron [-] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 18 15:07:23 compute-0 nova_compute[189016]: 2026-02-18 15:07:23.861 189020 DEBUG nova.network.neutron [-] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 18 15:07:23 compute-0 nova_compute[189016]: 2026-02-18 15:07:23.875 189020 INFO nova.compute.manager [-] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Took 0.63 seconds to deallocate network for instance.
Feb 18 15:07:23 compute-0 nova_compute[189016]: 2026-02-18 15:07:23.913 189020 DEBUG oslo_concurrency.lockutils [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 15:07:23 compute-0 nova_compute[189016]: 2026-02-18 15:07:23.914 189020 DEBUG oslo_concurrency.lockutils [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 15:07:24 compute-0 nova_compute[189016]: 2026-02-18 15:07:24.006 189020 DEBUG nova.compute.provider_tree [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 18 15:07:24 compute-0 nova_compute[189016]: 2026-02-18 15:07:24.044 189020 DEBUG nova.scheduler.client.report [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 18 15:07:24 compute-0 nova_compute[189016]: 2026-02-18 15:07:24.082 189020 DEBUG oslo_concurrency.lockutils [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 15:07:24 compute-0 nova_compute[189016]: 2026-02-18 15:07:24.122 189020 INFO nova.scheduler.client.report [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Deleted allocations for instance 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5
Feb 18 15:07:24 compute-0 nova_compute[189016]: 2026-02-18 15:07:24.206 189020 DEBUG oslo_concurrency.lockutils [None req-e98bf4a5-94a3-4de9-8881-ce1d461bb82c 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.196 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.198 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.198 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.199 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f78deee07d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.201 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.201 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.212 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'debb3011-9258-4f04-9eb4-592cc56eb3eb', 'name': 'test_0', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.217 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5d11e815-9fde-4624-9556-a726f1b266ba', 'name': 'vn-mz2bh6c-dqrgqnefazks-iswfzet66xcu-vnf-dzerxjls3fss', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {'metering.server_group': '449d1667-0173-4809-b0e3-b50e27381afa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.218 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.219 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.219 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.220 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-18T15:07:25.219827) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.293 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 698777971 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.295 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 118502891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.295 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 69122624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.379 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.latency volume: 693939868 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.380 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.latency volume: 100498264 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.381 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.latency volume: 81129348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.381 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.382 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f78deee0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.382 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.382 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.383 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.383 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.383 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.384 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-18T15:07:25.383438) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.384 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.385 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.385 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.385 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.386 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.386 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f78deee0830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.387 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.387 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.387 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.388 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.388 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.388 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-18T15:07:25.387920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.388 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.389 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.389 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.390 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.390 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.391 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.391 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f78deee2ae0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.391 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.392 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.392 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.392 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.393 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-18T15:07:25.392644) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.417 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.419 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.419 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.442 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.442 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.443 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.443 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.443 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f78deee0890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.444 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.444 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.444 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.444 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.445 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.445 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-18T15:07:25.444706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.445 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.446 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.446 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.446 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.447 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.447 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.447 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f78deee08f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.447 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.448 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.448 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.448 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.449 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.449 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-18T15:07:25.448599) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.449 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.449 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.450 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.450 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.450 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.451 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.451 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f78deee2390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.451 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.451 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.452 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.452 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.452 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-18T15:07:25.452230) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.457 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.463 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.463 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.464 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f78e1127770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.464 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.464 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.464 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.464 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.465 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-18T15:07:25.464905) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.485 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/cpu volume: 40950000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.510 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/cpu volume: 36940000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.511 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.511 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f78deee0950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.511 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.511 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.511 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.511 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.512 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 2529045248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.512 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 11626356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.512 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.512 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.latency volume: 42330684447 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.512 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.latency volume: 18712803 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.513 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.513 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-18T15:07:25.511906) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.513 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.514 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f78deee21b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.514 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.514 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.514 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.514 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.514 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.514 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.515 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.515 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f78deee0d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.515 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.515 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.516 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.516 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-18T15:07:25.514387) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.516 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.516 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.516 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-18T15:07:25.516230) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.516 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.517 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.517 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f78deee09b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.517 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.517 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.517 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.517 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.517 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.517 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-18T15:07:25.517504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.518 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.518 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.518 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.518 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.519 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.519 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.519 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f78deee1010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.519 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.520 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.520 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.520 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.520 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.520 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.520 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.521 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f78deee0a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.521 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.521 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.521 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.521 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.521 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-18T15:07:25.520194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-18T15:07:25.521760) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.522 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.522 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f78deee1280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.522 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.522 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.522 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.522 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.522 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.523 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-18T15:07:25.522735) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.523 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.523 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f78deee2240>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.523 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.524 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.524 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.524 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.524 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-18T15:07:25.524174) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.524 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.524 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.524 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f78deee0a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.525 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.525 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.525 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.525 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-18T15:07:25.525214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.525 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.525 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f78deee22d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.525 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.525 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.526 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.526 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.526 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.526 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.526 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-18T15:07:25.526099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.526 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.527 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f78deee2330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.527 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.527 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f78deee23c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.527 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.527 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.527 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.527 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.527 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.527 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-18T15:07:25.527444) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.528 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.528 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f78deee0cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.528 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.528 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.528 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.528 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.528 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/memory.usage volume: 48.734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.529 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-18T15:07:25.528828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.529 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/memory.usage volume: 48.921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.529 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.529 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f78deee2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.529 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.529 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.529 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.529 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.530 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-18T15:07:25.529893) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.530 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.530 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.530 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.530 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.531 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.531 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.531 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f78deee0d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.531 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.531 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.532 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.532 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.532 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.532 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-18T15:07:25.532073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.532 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.533 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f78deee1520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.533 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.533 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.533 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.533 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.533 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.533 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-18T15:07:25.533451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.534 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.534 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f78deee20f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.534 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.534 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.534 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.534 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.535 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-18T15:07:25.534752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.535 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.535 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.535 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f78deee2420>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.535 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.536 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.537 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.537 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.537 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.537 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.537 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.537 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.537 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.537 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.537 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.537 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.537 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.537 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.537 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.537 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:07:25.537 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:07:27 compute-0 nova_compute[189016]: 2026-02-18 15:07:27.348 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:27 compute-0 nova_compute[189016]: 2026-02-18 15:07:27.687 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:29 compute-0 podman[248425]: 2026-02-18 15:07:29.745955623 +0000 UTC m=+0.062682510 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 18 15:07:29 compute-0 podman[204930]: time="2026-02-18T15:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:07:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:07:29 compute-0 podman[248424]: 2026-02-18 15:07:29.807067272 +0000 UTC m=+0.123794719 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 18 15:07:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4385 "" "Go-http-client/1.1"
Feb 18 15:07:31 compute-0 openstack_network_exporter[208107]: ERROR   15:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:07:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:07:31 compute-0 openstack_network_exporter[208107]: ERROR   15:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:07:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:07:32 compute-0 nova_compute[189016]: 2026-02-18 15:07:32.351 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:32 compute-0 nova_compute[189016]: 2026-02-18 15:07:32.689 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:32 compute-0 podman[248465]: 2026-02-18 15:07:32.862037922 +0000 UTC m=+0.161412573 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 18 15:07:37 compute-0 nova_compute[189016]: 2026-02-18 15:07:37.352 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:37 compute-0 nova_compute[189016]: 2026-02-18 15:07:37.692 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:38 compute-0 nova_compute[189016]: 2026-02-18 15:07:38.179 189020 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771427243.176655, 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:07:38 compute-0 nova_compute[189016]: 2026-02-18 15:07:38.181 189020 INFO nova.compute.manager [-] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] VM Stopped (Lifecycle Event)#033[00m
Feb 18 15:07:38 compute-0 nova_compute[189016]: 2026-02-18 15:07:38.206 189020 DEBUG nova.compute.manager [None req-9c4eec5d-b5fe-4f1d-aa18-79d758e7622e - - - - - -] [instance: 75aaf7e4-228c-4bc6-8c47-f8d4c86ff0f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:07:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:07:41.448 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:07:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:07:41.450 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:07:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:07:41.452 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:07:42 compute-0 nova_compute[189016]: 2026-02-18 15:07:42.355 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:42 compute-0 nova_compute[189016]: 2026-02-18 15:07:42.694 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:45 compute-0 podman[248492]: 2026-02-18 15:07:45.77545967 +0000 UTC m=+0.074666167 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 15:07:45 compute-0 podman[248493]: 2026-02-18 15:07:45.777479911 +0000 UTC m=+0.079861427 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1770267347, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, container_name=openstack_network_exporter, vendor=Red Hat, Inc.)
Feb 18 15:07:47 compute-0 nova_compute[189016]: 2026-02-18 15:07:47.358 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:47 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Feb 18 15:07:47 compute-0 systemd-logind[831]: Session 29 logged out. Waiting for processes to exit.
Feb 18 15:07:47 compute-0 systemd-logind[831]: Removed session 29.
Feb 18 15:07:47 compute-0 nova_compute[189016]: 2026-02-18 15:07:47.697 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:48 compute-0 nova_compute[189016]: 2026-02-18 15:07:48.045 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:07:50 compute-0 nova_compute[189016]: 2026-02-18 15:07:50.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:07:50 compute-0 nova_compute[189016]: 2026-02-18 15:07:50.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:07:50 compute-0 nova_compute[189016]: 2026-02-18 15:07:50.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 15:07:50 compute-0 nova_compute[189016]: 2026-02-18 15:07:50.878 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:07:50 compute-0 nova_compute[189016]: 2026-02-18 15:07:50.879 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:07:50 compute-0 nova_compute[189016]: 2026-02-18 15:07:50.879 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 15:07:50 compute-0 nova_compute[189016]: 2026-02-18 15:07:50.879 189020 DEBUG nova.objects.instance [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lazy-loading 'info_cache' on Instance uuid debb3011-9258-4f04-9eb4-592cc56eb3eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:07:51 compute-0 podman[248535]: 2026-02-18 15:07:51.78180033 +0000 UTC m=+0.081072647 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.expose-services=, config_id=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.buildah.version=1.29.0, release-0.7.12=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Feb 18 15:07:51 compute-0 podman[248533]: 2026-02-18 15:07:51.792443175 +0000 UTC m=+0.108673704 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 18 15:07:51 compute-0 podman[248534]: 2026-02-18 15:07:51.816017561 +0000 UTC m=+0.122932938 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 18 15:07:52 compute-0 nova_compute[189016]: 2026-02-18 15:07:52.361 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:52 compute-0 nova_compute[189016]: 2026-02-18 15:07:52.700 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:52 compute-0 nova_compute[189016]: 2026-02-18 15:07:52.919 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updating instance_info_cache with network_info: [{"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:07:52 compute-0 nova_compute[189016]: 2026-02-18 15:07:52.941 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:07:52 compute-0 nova_compute[189016]: 2026-02-18 15:07:52.942 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 15:07:52 compute-0 nova_compute[189016]: 2026-02-18 15:07:52.942 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:07:52 compute-0 nova_compute[189016]: 2026-02-18 15:07:52.942 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:07:52 compute-0 nova_compute[189016]: 2026-02-18 15:07:52.943 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:07:52 compute-0 nova_compute[189016]: 2026-02-18 15:07:52.943 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:07:52 compute-0 nova_compute[189016]: 2026-02-18 15:07:52.943 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.079 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.080 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.080 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.081 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.202 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.262 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.263 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.314 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.315 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.372 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.373 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.428 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.436 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.491 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.492 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.552 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.553 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.606 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.607 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:07:55 compute-0 nova_compute[189016]: 2026-02-18 15:07:55.665 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:07:56 compute-0 nova_compute[189016]: 2026-02-18 15:07:56.010 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:07:56 compute-0 nova_compute[189016]: 2026-02-18 15:07:56.012 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4902MB free_disk=72.19685363769531GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 15:07:56 compute-0 nova_compute[189016]: 2026-02-18 15:07:56.012 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:07:56 compute-0 nova_compute[189016]: 2026-02-18 15:07:56.012 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:07:56 compute-0 systemd-logind[831]: New session 30 of user zuul.
Feb 18 15:07:56 compute-0 systemd[1]: Started Session 30 of User zuul.
Feb 18 15:07:56 compute-0 nova_compute[189016]: 2026-02-18 15:07:56.106 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:07:56 compute-0 nova_compute[189016]: 2026-02-18 15:07:56.106 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 5d11e815-9fde-4624-9556-a726f1b266ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:07:56 compute-0 nova_compute[189016]: 2026-02-18 15:07:56.107 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 15:07:56 compute-0 nova_compute[189016]: 2026-02-18 15:07:56.107 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 15:07:56 compute-0 nova_compute[189016]: 2026-02-18 15:07:56.131 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing inventories for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 18 15:07:56 compute-0 nova_compute[189016]: 2026-02-18 15:07:56.149 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating ProviderTree inventory for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 18 15:07:56 compute-0 nova_compute[189016]: 2026-02-18 15:07:56.149 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating inventory in ProviderTree for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 18 15:07:56 compute-0 nova_compute[189016]: 2026-02-18 15:07:56.163 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing aggregate associations for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 18 15:07:56 compute-0 nova_compute[189016]: 2026-02-18 15:07:56.181 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing trait associations for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380, traits: HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 18 15:07:56 compute-0 nova_compute[189016]: 2026-02-18 15:07:56.236 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:07:56 compute-0 nova_compute[189016]: 2026-02-18 15:07:56.255 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:07:56 compute-0 nova_compute[189016]: 2026-02-18 15:07:56.282 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 15:07:56 compute-0 nova_compute[189016]: 2026-02-18 15:07:56.282 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:07:56 compute-0 python3[248795]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 15:07:57 compute-0 nova_compute[189016]: 2026-02-18 15:07:57.363 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:57 compute-0 nova_compute[189016]: 2026-02-18 15:07:57.702 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:07:58 compute-0 nova_compute[189016]: 2026-02-18 15:07:58.283 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:07:59 compute-0 podman[204930]: time="2026-02-18T15:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:07:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:07:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
Feb 18 15:08:00 compute-0 podman[248835]: 2026-02-18 15:08:00.739802694 +0000 UTC m=+0.063960432 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Feb 18 15:08:00 compute-0 podman[248836]: 2026-02-18 15:08:00.743406913 +0000 UTC m=+0.061789217 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 18 15:08:01 compute-0 nova_compute[189016]: 2026-02-18 15:08:01.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:08:01 compute-0 openstack_network_exporter[208107]: ERROR   15:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:08:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:08:01 compute-0 openstack_network_exporter[208107]: ERROR   15:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:08:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:08:02 compute-0 nova_compute[189016]: 2026-02-18 15:08:02.365 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:02 compute-0 nova_compute[189016]: 2026-02-18 15:08:02.704 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:03 compute-0 podman[249026]: 2026-02-18 15:08:03.70139235 +0000 UTC m=+0.085746793 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true)
Feb 18 15:08:03 compute-0 python3[249074]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 15:08:07 compute-0 nova_compute[189016]: 2026-02-18 15:08:07.369 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:07 compute-0 nova_compute[189016]: 2026-02-18 15:08:07.706 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:12 compute-0 nova_compute[189016]: 2026-02-18 15:08:12.372 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:12 compute-0 nova_compute[189016]: 2026-02-18 15:08:12.708 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:12 compute-0 python3[249297]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 15:08:16 compute-0 podman[249336]: 2026-02-18 15:08:16.774665398 +0000 UTC m=+0.104506569 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 18 15:08:16 compute-0 podman[249337]: 2026-02-18 15:08:16.79644116 +0000 UTC m=+0.111644867 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.tags=minimal rhel9, release=1770267347, version=9.7, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc.)
Feb 18 15:08:17 compute-0 nova_compute[189016]: 2026-02-18 15:08:17.375 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:17 compute-0 nova_compute[189016]: 2026-02-18 15:08:17.710 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:22 compute-0 nova_compute[189016]: 2026-02-18 15:08:22.377 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:22 compute-0 nova_compute[189016]: 2026-02-18 15:08:22.712 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:22 compute-0 podman[249383]: 2026-02-18 15:08:22.758070073 +0000 UTC m=+0.077320694 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 18 15:08:22 compute-0 podman[249384]: 2026-02-18 15:08:22.764534944 +0000 UTC m=+0.077927839 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, name=ubi9, config_id=kepler, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 18 15:08:22 compute-0 podman[249382]: 2026-02-18 15:08:22.773692531 +0000 UTC m=+0.094339967 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 18 15:08:27 compute-0 nova_compute[189016]: 2026-02-18 15:08:27.380 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:27 compute-0 python3[249614]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 18 15:08:27 compute-0 nova_compute[189016]: 2026-02-18 15:08:27.715 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:29 compute-0 podman[204930]: time="2026-02-18T15:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:08:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:08:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
Feb 18 15:08:31 compute-0 openstack_network_exporter[208107]: ERROR   15:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:08:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:08:31 compute-0 openstack_network_exporter[208107]: ERROR   15:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:08:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:08:31 compute-0 podman[249655]: 2026-02-18 15:08:31.746843467 +0000 UTC m=+0.072537345 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 15:08:31 compute-0 podman[249654]: 2026-02-18 15:08:31.761281686 +0000 UTC m=+0.086118942 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Feb 18 15:08:32 compute-0 nova_compute[189016]: 2026-02-18 15:08:32.383 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:32 compute-0 nova_compute[189016]: 2026-02-18 15:08:32.718 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:34 compute-0 podman[249695]: 2026-02-18 15:08:34.78792528 +0000 UTC m=+0.117856481 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true)
Feb 18 15:08:37 compute-0 nova_compute[189016]: 2026-02-18 15:08:37.384 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:37 compute-0 nova_compute[189016]: 2026-02-18 15:08:37.720 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:08:41.448 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:08:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:08:41.449 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:08:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:08:41.450 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:08:42 compute-0 nova_compute[189016]: 2026-02-18 15:08:42.386 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:42 compute-0 nova_compute[189016]: 2026-02-18 15:08:42.722 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:47 compute-0 podman[249724]: 2026-02-18 15:08:47.225048539 +0000 UTC m=+0.067976301 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, 
io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, container_name=openstack_network_exporter)
Feb 18 15:08:47 compute-0 podman[249723]: 2026-02-18 15:08:47.246904043 +0000 UTC m=+0.089072116 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 18 15:08:47 compute-0 nova_compute[189016]: 2026-02-18 15:08:47.388 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:47 compute-0 nova_compute[189016]: 2026-02-18 15:08:47.725 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:50 compute-0 nova_compute[189016]: 2026-02-18 15:08:50.045 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:08:50 compute-0 nova_compute[189016]: 2026-02-18 15:08:50.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:08:50 compute-0 nova_compute[189016]: 2026-02-18 15:08:50.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:08:51 compute-0 nova_compute[189016]: 2026-02-18 15:08:51.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:08:52 compute-0 nova_compute[189016]: 2026-02-18 15:08:52.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:08:52 compute-0 nova_compute[189016]: 2026-02-18 15:08:52.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:08:52 compute-0 nova_compute[189016]: 2026-02-18 15:08:52.392 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:52 compute-0 nova_compute[189016]: 2026-02-18 15:08:52.728 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:52 compute-0 nova_compute[189016]: 2026-02-18 15:08:52.874 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:08:52 compute-0 nova_compute[189016]: 2026-02-18 15:08:52.875 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:08:52 compute-0 nova_compute[189016]: 2026-02-18 15:08:52.875 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 15:08:53 compute-0 podman[249769]: 2026-02-18 15:08:53.761772611 +0000 UTC m=+0.077600651 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Feb 18 15:08:53 compute-0 podman[249768]: 2026-02-18 15:08:53.766809696 +0000 UTC m=+0.086495872 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 18 15:08:53 compute-0 podman[249770]: 2026-02-18 15:08:53.786733671 +0000 UTC m=+0.098219033 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, config_id=kepler, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Feb 18 15:08:54 compute-0 nova_compute[189016]: 2026-02-18 15:08:54.907 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Updating instance_info_cache with network_info: [{"id": "fa58a88f-dd18-4a95-98c8-21e845485f69", "address": "fa:16:3e:74:22:e0", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.174", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa58a88f-dd", "ovs_interfaceid": "fa58a88f-dd18-4a95-98c8-21e845485f69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:08:54 compute-0 nova_compute[189016]: 2026-02-18 15:08:54.925 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:08:54 compute-0 nova_compute[189016]: 2026-02-18 15:08:54.925 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 15:08:54 compute-0 nova_compute[189016]: 2026-02-18 15:08:54.926 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:08:54 compute-0 nova_compute[189016]: 2026-02-18 15:08:54.926 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.079 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.079 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.080 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.080 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.163 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.225 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.226 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.288 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.291 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.340 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.343 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.407 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.415 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.465 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.466 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.516 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.517 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.569 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.570 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.626 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.921 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.922 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4893MB free_disk=72.19570541381836GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.923 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:08:55 compute-0 nova_compute[189016]: 2026-02-18 15:08:55.923 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:08:56 compute-0 nova_compute[189016]: 2026-02-18 15:08:56.007 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:08:56 compute-0 nova_compute[189016]: 2026-02-18 15:08:56.008 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 5d11e815-9fde-4624-9556-a726f1b266ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:08:56 compute-0 nova_compute[189016]: 2026-02-18 15:08:56.008 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 15:08:56 compute-0 nova_compute[189016]: 2026-02-18 15:08:56.009 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 15:08:56 compute-0 nova_compute[189016]: 2026-02-18 15:08:56.085 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:08:56 compute-0 nova_compute[189016]: 2026-02-18 15:08:56.105 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:08:56 compute-0 nova_compute[189016]: 2026-02-18 15:08:56.110 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 15:08:56 compute-0 nova_compute[189016]: 2026-02-18 15:08:56.110 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:08:57 compute-0 nova_compute[189016]: 2026-02-18 15:08:57.396 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:57 compute-0 nova_compute[189016]: 2026-02-18 15:08:57.732 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:08:58 compute-0 nova_compute[189016]: 2026-02-18 15:08:58.106 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:08:58 compute-0 nova_compute[189016]: 2026-02-18 15:08:58.152 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:08:59 compute-0 podman[204930]: time="2026-02-18T15:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:08:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:08:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Feb 18 15:09:01 compute-0 openstack_network_exporter[208107]: ERROR   15:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:09:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:09:01 compute-0 openstack_network_exporter[208107]: ERROR   15:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:09:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:09:02 compute-0 nova_compute[189016]: 2026-02-18 15:09:02.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:09:02 compute-0 nova_compute[189016]: 2026-02-18 15:09:02.407 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:02 compute-0 nova_compute[189016]: 2026-02-18 15:09:02.734 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:02 compute-0 podman[249847]: 2026-02-18 15:09:02.767604471 +0000 UTC m=+0.073043177 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 18 15:09:02 compute-0 podman[249848]: 2026-02-18 15:09:02.783106307 +0000 UTC m=+0.092488021 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 15:09:05 compute-0 podman[249889]: 2026-02-18 15:09:05.804046918 +0000 UTC m=+0.121994575 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 15:09:07 compute-0 nova_compute[189016]: 2026-02-18 15:09:07.410 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:07 compute-0 nova_compute[189016]: 2026-02-18 15:09:07.737 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:08 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 18 15:09:12 compute-0 nova_compute[189016]: 2026-02-18 15:09:12.412 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:12 compute-0 nova_compute[189016]: 2026-02-18 15:09:12.741 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:17 compute-0 nova_compute[189016]: 2026-02-18 15:09:17.415 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:17 compute-0 nova_compute[189016]: 2026-02-18 15:09:17.745 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:17 compute-0 podman[249917]: 2026-02-18 15:09:17.753900756 +0000 UTC m=+0.071182251 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 18 15:09:17 compute-0 podman[249918]: 2026-02-18 15:09:17.765299749 +0000 UTC m=+0.080591735 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, version=9.7, architecture=x86_64, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 18 15:09:22 compute-0 nova_compute[189016]: 2026-02-18 15:09:22.418 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:22 compute-0 nova_compute[189016]: 2026-02-18 15:09:22.747 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:24 compute-0 podman[249962]: 2026-02-18 15:09:24.750401509 +0000 UTC m=+0.064241355 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Feb 18 15:09:24 compute-0 podman[249963]: 2026-02-18 15:09:24.781553692 +0000 UTC m=+0.086724673 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi)
Feb 18 15:09:24 compute-0 podman[249964]: 2026-02-18 15:09:24.78671668 +0000 UTC m=+0.087035100 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, name=ubi9, release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-type=git, config_id=kepler, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=)
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.198 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.201 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.201 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.202 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f78deee07d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.213 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'debb3011-9258-4f04-9eb4-592cc56eb3eb', 'name': 'test_0', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.219 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5d11e815-9fde-4624-9556-a726f1b266ba', 'name': 'vn-mz2bh6c-dqrgqnefazks-iswfzet66xcu-vnf-dzerxjls3fss', 'flavor': {'id': '23e98520-0527-4596-8420-5ff1feeb3155', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '7cc2a96a-1e6c-474d-b671-0e2626bf4158'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '71c6c5d63b07447388ace322f081ffc3', 'user_id': '387d978e2b494e88ad13abae2a83321d', 'hostId': '446fc215b1d696afac3bb8f64779801bc00675e2792c377a3578cecd', 'status': 'active', 'metadata': {'metering.server_group': '449d1667-0173-4809-b0e3-b50e27381afa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.220 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.221 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.221 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.221 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.224 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-18T15:09:25.221657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.301 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 698777971 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.302 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 118502891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.302 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.latency volume: 69122624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.375 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.latency volume: 693939868 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.376 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.latency volume: 100498264 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.376 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.latency volume: 81129348 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.377 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.377 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f78deee0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.378 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.378 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.378 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.378 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.378 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.378 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.379 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.379 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.379 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.380 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.380 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.380 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f78deee0830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.381 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.381 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-18T15:09:25.378392) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.381 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.381 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.381 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.381 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.382 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.382 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.382 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.382 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.383 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.383 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-18T15:09:25.381581) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.384 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f78deee2ae0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.385 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.385 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.386 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.386 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.388 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-18T15:09:25.386104) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.414 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.415 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.415 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.436 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.437 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.437 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.437 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.437 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f78deee0890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.438 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.438 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.438 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.438 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.438 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.438 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.439 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-18T15:09:25.438334) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.439 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.439 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.439 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.439 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.440 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f78deee08f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.440 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.440 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.440 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.440 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.440 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.440 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.440 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.441 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.441 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.441 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.441 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.442 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f78deee2390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.443 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.443 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.443 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.443 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-18T15:09:25.440385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.443 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.444 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-18T15:09:25.443458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.448 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.453 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.453 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.454 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f78e1127770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.454 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.454 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.454 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.454 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.455 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-18T15:09:25.454377) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.477 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/cpu volume: 42220000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.502 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/cpu volume: 38210000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.503 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.504 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f78deee0950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.504 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.504 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.504 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.504 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.505 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 2529045248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.505 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 11626356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.505 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-18T15:09:25.504516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.505 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.505 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.latency volume: 42330684447 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.506 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.latency volume: 18712803 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.506 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.506 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.506 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f78deee21b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.507 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.507 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.507 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.507 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.507 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.507 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.508 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.508 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f78deee0d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.508 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.508 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.508 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.509 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.509 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.509 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.510 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.510 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f78deee09b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.510 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.510 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-18T15:09:25.507329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.510 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-18T15:09:25.508938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.510 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.510 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-18T15:09:25.510792) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.511 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.511 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.511 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.512 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.512 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.512 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.512 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f78deee1010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.512 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.513 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.513 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.513 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.513 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.513 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.514 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.514 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f78deee0a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.514 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.514 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-18T15:09:25.513305) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.514 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.515 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.515 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-18T15:09:25.515026) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.515 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.515 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f78deee1280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.515 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.516 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.516 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.516 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.516 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-18T15:09:25.516182) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.516 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.516 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.517 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.517 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f78deee2240>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.517 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.517 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.517 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.517 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.517 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-18T15:09:25.517545) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.518 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.518 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.518 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f78deee0a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.518 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.518 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.518 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.518 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.519 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-18T15:09:25.518786) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.519 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.519 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f78deee22d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.519 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.519 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.519 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.519 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.520 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.520 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-18T15:09:25.519797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.520 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.520 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.520 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f78deee2330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.520 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.521 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f78deee23c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.521 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.521 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.521 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.521 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.521 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.521 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.522 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-18T15:09:25.521324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.522 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f78deee0cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.522 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.522 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.522 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.522 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.522 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/memory.usage volume: 48.734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-18T15:09:25.522659) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.523 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/memory.usage volume: 48.921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.523 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.523 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f78deee2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.523 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.523 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.523 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.523 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.524 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-18T15:09:25.523868) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.524 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.524 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.524 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.525 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.525 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.525 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.525 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f78deee0d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.525 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.525 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.525 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.526 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.526 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.526 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.526 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.526 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f78deee1520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.526 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.527 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-18T15:09:25.526045) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.527 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.527 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.527 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.527 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.527 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.527 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-18T15:09:25.527246) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.528 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.528 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f78deee20f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.528 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.528 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.528 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.528 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.528 15 DEBUG ceilometer.compute.pollsters [-] debb3011-9258-4f04-9eb4-592cc56eb3eb/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.528 15 DEBUG ceilometer.compute.pollsters [-] 5d11e815-9fde-4624-9556-a726f1b266ba/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.529 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.529 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f78deee2420>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.529 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-18T15:09:25.528572) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.529 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.530 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.530 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.530 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.530 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.530 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.530 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.530 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.530 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.530 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.530 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.530 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.530 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.531 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.531 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.531 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.531 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.531 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.531 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.531 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.531 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.531 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.531 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.531 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.531 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.531 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:09:25.531 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:09:27 compute-0 nova_compute[189016]: 2026-02-18 15:09:27.421 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:27 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Feb 18 15:09:27 compute-0 systemd[1]: session-30.scope: Consumed 3.304s CPU time.
Feb 18 15:09:27 compute-0 systemd-logind[831]: Session 30 logged out. Waiting for processes to exit.
Feb 18 15:09:27 compute-0 systemd-logind[831]: Removed session 30.
Feb 18 15:09:27 compute-0 nova_compute[189016]: 2026-02-18 15:09:27.751 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:29 compute-0 podman[204930]: time="2026-02-18T15:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:09:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:09:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Feb 18 15:09:31 compute-0 openstack_network_exporter[208107]: ERROR   15:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:09:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:09:31 compute-0 openstack_network_exporter[208107]: ERROR   15:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:09:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:09:32 compute-0 nova_compute[189016]: 2026-02-18 15:09:32.423 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:32 compute-0 nova_compute[189016]: 2026-02-18 15:09:32.756 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:33 compute-0 podman[250016]: 2026-02-18 15:09:33.756507383 +0000 UTC m=+0.079870033 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260216)
Feb 18 15:09:33 compute-0 podman[250017]: 2026-02-18 15:09:33.757542498 +0000 UTC m=+0.076900319 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 15:09:36 compute-0 podman[250059]: 2026-02-18 15:09:36.786639715 +0000 UTC m=+0.097731666 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 18 15:09:37 compute-0 nova_compute[189016]: 2026-02-18 15:09:37.426 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:37 compute-0 nova_compute[189016]: 2026-02-18 15:09:37.759 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:09:41.452 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:09:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:09:41.457 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:09:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:09:41.461 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:09:42 compute-0 nova_compute[189016]: 2026-02-18 15:09:42.427 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:42 compute-0 nova_compute[189016]: 2026-02-18 15:09:42.761 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:47 compute-0 nova_compute[189016]: 2026-02-18 15:09:47.429 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:47 compute-0 nova_compute[189016]: 2026-02-18 15:09:47.764 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:48 compute-0 podman[250084]: 2026-02-18 15:09:48.744708657 +0000 UTC m=+0.071537286 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 15:09:48 compute-0 podman[250085]: 2026-02-18 15:09:48.774096756 +0000 UTC m=+0.096844523 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1770267347, vendor=Red Hat, Inc., vcs-type=git, config_id=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter)
Feb 18 15:09:52 compute-0 nova_compute[189016]: 2026-02-18 15:09:52.047 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:09:52 compute-0 nova_compute[189016]: 2026-02-18 15:09:52.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:09:52 compute-0 nova_compute[189016]: 2026-02-18 15:09:52.050 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:09:52 compute-0 nova_compute[189016]: 2026-02-18 15:09:52.050 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 15:09:52 compute-0 nova_compute[189016]: 2026-02-18 15:09:52.433 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:52 compute-0 nova_compute[189016]: 2026-02-18 15:09:52.766 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:52 compute-0 nova_compute[189016]: 2026-02-18 15:09:52.875 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:09:52 compute-0 nova_compute[189016]: 2026-02-18 15:09:52.875 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:09:52 compute-0 nova_compute[189016]: 2026-02-18 15:09:52.876 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 15:09:52 compute-0 nova_compute[189016]: 2026-02-18 15:09:52.876 189020 DEBUG nova.objects.instance [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lazy-loading 'info_cache' on Instance uuid debb3011-9258-4f04-9eb4-592cc56eb3eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:09:55 compute-0 podman[250133]: 2026-02-18 15:09:55.782755768 +0000 UTC m=+0.096461364 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 18 15:09:55 compute-0 podman[250134]: 2026-02-18 15:09:55.803559184 +0000 UTC m=+0.115119137 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 18 15:09:55 compute-0 podman[250135]: 2026-02-18 15:09:55.804823645 +0000 UTC m=+0.109948288 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, 
middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, release-0.7.12=, maintainer=Red Hat, Inc.)
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.302 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updating instance_info_cache with network_info: [{"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.327 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-debb3011-9258-4f04-9eb4-592cc56eb3eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.328 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.328 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.329 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.329 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.330 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.330 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.330 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.355 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.356 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.357 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.357 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.435 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.459 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.533 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.534 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.585 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.586 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.648 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.649 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.696 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb/disk.eph0 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.706 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.755 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.757 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.776 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.824 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.825 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.880 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.881 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:09:57 compute-0 nova_compute[189016]: 2026-02-18 15:09:57.940 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:09:58 compute-0 nova_compute[189016]: 2026-02-18 15:09:58.326 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:09:58 compute-0 nova_compute[189016]: 2026-02-18 15:09:58.327 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4852MB free_disk=72.19573211669922GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 15:09:58 compute-0 nova_compute[189016]: 2026-02-18 15:09:58.328 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:09:58 compute-0 nova_compute[189016]: 2026-02-18 15:09:58.328 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:09:58 compute-0 nova_compute[189016]: 2026-02-18 15:09:58.708 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance debb3011-9258-4f04-9eb4-592cc56eb3eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:09:58 compute-0 nova_compute[189016]: 2026-02-18 15:09:58.708 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 5d11e815-9fde-4624-9556-a726f1b266ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:09:58 compute-0 nova_compute[189016]: 2026-02-18 15:09:58.708 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 15:09:58 compute-0 nova_compute[189016]: 2026-02-18 15:09:58.709 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 15:09:58 compute-0 nova_compute[189016]: 2026-02-18 15:09:58.907 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:09:58 compute-0 nova_compute[189016]: 2026-02-18 15:09:58.934 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:09:58 compute-0 nova_compute[189016]: 2026-02-18 15:09:58.936 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 15:09:58 compute-0 nova_compute[189016]: 2026-02-18 15:09:58.936 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:09:59 compute-0 podman[204930]: time="2026-02-18T15:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:09:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:09:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Feb 18 15:10:00 compute-0 nova_compute[189016]: 2026-02-18 15:10:00.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:10:00 compute-0 nova_compute[189016]: 2026-02-18 15:10:00.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:10:01 compute-0 openstack_network_exporter[208107]: ERROR   15:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:10:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:10:01 compute-0 openstack_network_exporter[208107]: ERROR   15:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:10:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:10:02 compute-0 nova_compute[189016]: 2026-02-18 15:10:02.438 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:02 compute-0 nova_compute[189016]: 2026-02-18 15:10:02.780 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:04 compute-0 nova_compute[189016]: 2026-02-18 15:10:04.077 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:10:04 compute-0 podman[250214]: 2026-02-18 15:10:04.735655123 +0000 UTC m=+0.058856641 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 15:10:04 compute-0 podman[250213]: 2026-02-18 15:10:04.740182546 +0000 UTC m=+0.066107252 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, config_id=ceilometer_agent_compute)
Feb 18 15:10:07 compute-0 nova_compute[189016]: 2026-02-18 15:10:07.440 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:07 compute-0 podman[250253]: 2026-02-18 15:10:07.590906627 +0000 UTC m=+0.115393554 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 15:10:07 compute-0 nova_compute[189016]: 2026-02-18 15:10:07.782 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:08 compute-0 nova_compute[189016]: 2026-02-18 15:10:08.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:10:08 compute-0 nova_compute[189016]: 2026-02-18 15:10:08.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 18 15:10:12 compute-0 nova_compute[189016]: 2026-02-18 15:10:12.442 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:12 compute-0 nova_compute[189016]: 2026-02-18 15:10:12.786 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:14 compute-0 nova_compute[189016]: 2026-02-18 15:10:14.066 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:10:14 compute-0 nova_compute[189016]: 2026-02-18 15:10:14.067 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 18 15:10:14 compute-0 nova_compute[189016]: 2026-02-18 15:10:14.084 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 18 15:10:17 compute-0 nova_compute[189016]: 2026-02-18 15:10:17.443 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:17 compute-0 nova_compute[189016]: 2026-02-18 15:10:17.788 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:19 compute-0 podman[250280]: 2026-02-18 15:10:19.757015497 +0000 UTC m=+0.074632432 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 18 15:10:19 compute-0 podman[250281]: 2026-02-18 15:10:19.779573327 +0000 UTC m=+0.090989489 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 18 15:10:22 compute-0 nova_compute[189016]: 2026-02-18 15:10:22.446 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:22 compute-0 nova_compute[189016]: 2026-02-18 15:10:22.790 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:26 compute-0 podman[250326]: 2026-02-18 15:10:26.760434471 +0000 UTC m=+0.081365359 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 15:10:26 compute-0 podman[250325]: 2026-02-18 15:10:26.78056604 +0000 UTC m=+0.105652921 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Feb 18 15:10:26 compute-0 podman[250327]: 2026-02-18 15:10:26.798893075 +0000 UTC m=+0.115312351 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, version=9.4, architecture=x86_64, release-0.7.12=, name=ubi9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, 
container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.tags=base rhel9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 18 15:10:27 compute-0 nova_compute[189016]: 2026-02-18 15:10:27.448 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:27 compute-0 nova_compute[189016]: 2026-02-18 15:10:27.793 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.425 189020 DEBUG oslo_concurrency.lockutils [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "5d11e815-9fde-4624-9556-a726f1b266ba" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.425 189020 DEBUG oslo_concurrency.lockutils [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "5d11e815-9fde-4624-9556-a726f1b266ba" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.426 189020 DEBUG oslo_concurrency.lockutils [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.426 189020 DEBUG oslo_concurrency.lockutils [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.426 189020 DEBUG oslo_concurrency.lockutils [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.429 189020 INFO nova.compute.manager [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Terminating instance#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.430 189020 DEBUG nova.compute.manager [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 18 15:10:29 compute-0 kernel: tapfa58a88f-dd (unregistering): left promiscuous mode
Feb 18 15:10:29 compute-0 NetworkManager[57258]: <info>  [1771427429.4740] device (tapfa58a88f-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 18 15:10:29 compute-0 ovn_controller[99062]: 2026-02-18T15:10:29Z|00058|binding|INFO|Releasing lport fa58a88f-dd18-4a95-98c8-21e845485f69 from this chassis (sb_readonly=0)
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.481 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:29 compute-0 ovn_controller[99062]: 2026-02-18T15:10:29Z|00059|binding|INFO|Setting lport fa58a88f-dd18-4a95-98c8-21e845485f69 down in Southbound
Feb 18 15:10:29 compute-0 ovn_controller[99062]: 2026-02-18T15:10:29Z|00060|binding|INFO|Removing iface tapfa58a88f-dd ovn-installed in OVS
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.487 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.491 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:29 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:29.496 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:22:e0 192.168.0.174'], port_security=['fa:16:3e:74:22:e0 192.168.0.174'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-g63ccmz2bh6c-dqrgqnefazks-iswfzet66xcu-port-kgtkkgtpiadt', 'neutron:cidrs': '192.168.0.174/24', 'neutron:device_id': '5d11e815-9fde-4624-9556-a726f1b266ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-g63ccmz2bh6c-dqrgqnefazks-iswfzet66xcu-port-kgtkkgtpiadt', 'neutron:project_id': '71c6c5d63b07447388ace322f081ffc3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37e3ac68-e35f-4df2-b2af-136d5a1ee2d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af26da4e-fd70-4a49-a6e8-0a984b969598, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], logical_port=fa58a88f-dd18-4a95-98c8-21e845485f69) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:10:29 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:29.500 108400 INFO neutron.agent.ovn.metadata.agent [-] Port fa58a88f-dd18-4a95-98c8-21e845485f69 in datapath c269c00a-f738-4cb6-ac67-09050c56f9f2 unbound from our chassis#033[00m
Feb 18 15:10:29 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:29.504 108400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c269c00a-f738-4cb6-ac67-09050c56f9f2#033[00m
Feb 18 15:10:29 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:29.526 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[15748e32-2e8a-4949-ab32-fa117a0854ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:10:29 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Feb 18 15:10:29 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 38.251s CPU time.
Feb 18 15:10:29 compute-0 systemd-machined[158361]: Machine qemu-4-instance-00000004 terminated.
Feb 18 15:10:29 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:29.559 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d671a7-1dcc-42ff-8656-3028285f85e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:10:29 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:29.564 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[c3591e1b-8ecd-4a92-bc0f-0d0508690612]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:10:29 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:29.597 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[86e60e56-e951-486c-8bbe-c9aa3d5d01cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:10:29 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:29.615 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[761b3324-0d5a-4920-a3be-02dfa69029ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc269c00a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:4d:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 346647, 'reachable_time': 36248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250391, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:10:29 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:29.632 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[b87e8c18-bddc-4832-944f-20480d6f5abc]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc269c00a-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 346658, 'tstamp': 346658}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250392, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc269c00a-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 346660, 'tstamp': 346660}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250392, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:10:29 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:29.636 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc269c00a-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.639 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.644 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:29 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:29.647 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc269c00a-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:10:29 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:29.647 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:10:29 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:29.648 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc269c00a-f0, col_values=(('external_ids', {'iface-id': '7e592dc1-2432-46dc-b338-f9a04aad5932'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:10:29 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:29.649 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.657 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.662 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.720 189020 INFO nova.virt.libvirt.driver [-] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Instance destroyed successfully.#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.721 189020 DEBUG nova.objects.instance [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lazy-loading 'resources' on Instance uuid 5d11e815-9fde-4624-9556-a726f1b266ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.735 189020 DEBUG nova.virt.libvirt.vif [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-18T15:00:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-mz2bh6c-dqrgqnefazks-iswfzet66xcu-vnf-dzerxjls3fss',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mz2bh6c-dqrgqnefazks-iswfzet66xcu-vnf-dzerxjls3fss',id=4,image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-18T15:00:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='449d1667-0173-4809-b0e3-b50e27381afa'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71c6c5d63b07447388ace322f081ffc3',ramdisk_id='',reservation_id='r-lwioh85z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-18T15:00:36Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NjU0MzEzMjgzODMzMjgzMDY5PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU2NTQzMTMyODM4MzMyODMwNjk9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTY1NDMxMzI4MzgzMzI4MzA2OT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU2NTQzMTMyODM4MzMyODMwNjk9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NjU0MzEzMjgzODMzMjgzMDY5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NjU0MzEzMjgzODMzMjgzMDY5PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Feb 18 15:10:29 compute-0 nova_compute[189016]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTY1N
DMxMzI4MzgzMzI4MzA2OT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU2NTQzMTMyODM4MzMyODMwNjk9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NjU0MzEzMjgzODMzMjgzMDY5PT0tLQo=',user_id='387d978e2b494e88ad13abae2a83321d',uuid=5d11e815-9fde-4624-9556-a726f1b266ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fa58a88f-dd18-4a95-98c8-21e845485f69", "address": "fa:16:3e:74:22:e0", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.174", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa58a88f-dd", "ovs_interfaceid": "fa58a88f-dd18-4a95-98c8-21e845485f69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.735 189020 DEBUG nova.network.os_vif_util [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converting VIF {"id": "fa58a88f-dd18-4a95-98c8-21e845485f69", "address": "fa:16:3e:74:22:e0", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.174", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa58a88f-dd", "ovs_interfaceid": "fa58a88f-dd18-4a95-98c8-21e845485f69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.736 189020 DEBUG nova.network.os_vif_util [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:74:22:e0,bridge_name='br-int',has_traffic_filtering=True,id=fa58a88f-dd18-4a95-98c8-21e845485f69,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfa58a88f-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.737 189020 DEBUG os_vif [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:22:e0,bridge_name='br-int',has_traffic_filtering=True,id=fa58a88f-dd18-4a95-98c8-21e845485f69,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfa58a88f-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.740 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.740 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa58a88f-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.742 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.744 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:29 compute-0 podman[204930]: time="2026-02-18T15:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.749 189020 INFO os_vif [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:22:e0,bridge_name='br-int',has_traffic_filtering=True,id=fa58a88f-dd18-4a95-98c8-21e845485f69,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfa58a88f-dd')#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.750 189020 INFO nova.virt.libvirt.driver [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Deleting instance files /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba_del#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.751 189020 INFO nova.virt.libvirt.driver [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Deletion of /var/lib/nova/instances/5d11e815-9fde-4624-9556-a726f1b266ba_del complete#033[00m
Feb 18 15:10:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:10:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4388 "" "Go-http-client/1.1"
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.868 189020 INFO nova.compute.manager [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Took 0.44 seconds to destroy the instance on the hypervisor.#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.869 189020 DEBUG oslo.service.loopingcall [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.869 189020 DEBUG nova.compute.manager [-] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 18 15:10:29 compute-0 nova_compute[189016]: 2026-02-18 15:10:29.870 189020 DEBUG nova.network.neutron [-] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 18 15:10:30 compute-0 rsyslogd[239561]: message too long (8192) with configured size 8096, begin of message is: 2026-02-18 15:10:29.735 189020 DEBUG nova.virt.libvirt.vif [None req-dd5dbf71-cb [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:10:30 compute-0 nova_compute[189016]: 2026-02-18 15:10:30.452 189020 DEBUG nova.compute.manager [req-5d776e59-95b3-4534-9644-1beec4b62f2e req-88f779b8-58d2-4d4a-a25c-12716701f454 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Received event network-vif-unplugged-fa58a88f-dd18-4a95-98c8-21e845485f69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:10:30 compute-0 nova_compute[189016]: 2026-02-18 15:10:30.453 189020 DEBUG oslo_concurrency.lockutils [req-5d776e59-95b3-4534-9644-1beec4b62f2e req-88f779b8-58d2-4d4a-a25c-12716701f454 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:10:30 compute-0 nova_compute[189016]: 2026-02-18 15:10:30.453 189020 DEBUG oslo_concurrency.lockutils [req-5d776e59-95b3-4534-9644-1beec4b62f2e req-88f779b8-58d2-4d4a-a25c-12716701f454 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:10:30 compute-0 nova_compute[189016]: 2026-02-18 15:10:30.453 189020 DEBUG oslo_concurrency.lockutils [req-5d776e59-95b3-4534-9644-1beec4b62f2e req-88f779b8-58d2-4d4a-a25c-12716701f454 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:10:30 compute-0 nova_compute[189016]: 2026-02-18 15:10:30.454 189020 DEBUG nova.compute.manager [req-5d776e59-95b3-4534-9644-1beec4b62f2e req-88f779b8-58d2-4d4a-a25c-12716701f454 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] No waiting events found dispatching network-vif-unplugged-fa58a88f-dd18-4a95-98c8-21e845485f69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:10:30 compute-0 nova_compute[189016]: 2026-02-18 15:10:30.454 189020 DEBUG nova.compute.manager [req-5d776e59-95b3-4534-9644-1beec4b62f2e req-88f779b8-58d2-4d4a-a25c-12716701f454 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Received event network-vif-unplugged-fa58a88f-dd18-4a95-98c8-21e845485f69 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 18 15:10:30 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:30.675 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:bc:a6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:81:9f:51:00:b1'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:10:30 compute-0 nova_compute[189016]: 2026-02-18 15:10:30.676 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:30 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:30.677 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 18 15:10:30 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:30.679 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bff0df27-aa33-4d98-b417-cc9248f7a486, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:10:30 compute-0 nova_compute[189016]: 2026-02-18 15:10:30.917 189020 DEBUG nova.compute.manager [req-d2a85a30-e3c0-4b7c-9a8c-69aac55e4c9c req-089b7e07-1112-4de3-85c5-c40f6b4ffb7b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Received event network-changed-fa58a88f-dd18-4a95-98c8-21e845485f69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:10:30 compute-0 nova_compute[189016]: 2026-02-18 15:10:30.917 189020 DEBUG nova.compute.manager [req-d2a85a30-e3c0-4b7c-9a8c-69aac55e4c9c req-089b7e07-1112-4de3-85c5-c40f6b4ffb7b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Refreshing instance network info cache due to event network-changed-fa58a88f-dd18-4a95-98c8-21e845485f69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 18 15:10:30 compute-0 nova_compute[189016]: 2026-02-18 15:10:30.917 189020 DEBUG oslo_concurrency.lockutils [req-d2a85a30-e3c0-4b7c-9a8c-69aac55e4c9c req-089b7e07-1112-4de3-85c5-c40f6b4ffb7b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:10:30 compute-0 nova_compute[189016]: 2026-02-18 15:10:30.917 189020 DEBUG oslo_concurrency.lockutils [req-d2a85a30-e3c0-4b7c-9a8c-69aac55e4c9c req-089b7e07-1112-4de3-85c5-c40f6b4ffb7b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:10:30 compute-0 nova_compute[189016]: 2026-02-18 15:10:30.917 189020 DEBUG nova.network.neutron [req-d2a85a30-e3c0-4b7c-9a8c-69aac55e4c9c req-089b7e07-1112-4de3-85c5-c40f6b4ffb7b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Refreshing network info cache for port fa58a88f-dd18-4a95-98c8-21e845485f69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 18 15:10:31 compute-0 openstack_network_exporter[208107]: ERROR   15:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:10:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:10:31 compute-0 openstack_network_exporter[208107]: ERROR   15:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:10:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.151 189020 DEBUG nova.network.neutron [-] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.182 189020 INFO nova.compute.manager [-] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Took 2.31 seconds to deallocate network for instance.#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.225 189020 DEBUG oslo_concurrency.lockutils [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.226 189020 DEBUG oslo_concurrency.lockutils [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.307 189020 DEBUG nova.compute.provider_tree [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.328 189020 DEBUG nova.scheduler.client.report [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.352 189020 DEBUG oslo_concurrency.lockutils [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.377 189020 INFO nova.scheduler.client.report [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Deleted allocations for instance 5d11e815-9fde-4624-9556-a726f1b266ba#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.384 189020 DEBUG nova.network.neutron [req-d2a85a30-e3c0-4b7c-9a8c-69aac55e4c9c req-089b7e07-1112-4de3-85c5-c40f6b4ffb7b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Updated VIF entry in instance network info cache for port fa58a88f-dd18-4a95-98c8-21e845485f69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.385 189020 DEBUG nova.network.neutron [req-d2a85a30-e3c0-4b7c-9a8c-69aac55e4c9c req-089b7e07-1112-4de3-85c5-c40f6b4ffb7b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Updating instance_info_cache with network_info: [{"id": "fa58a88f-dd18-4a95-98c8-21e845485f69", "address": "fa:16:3e:74:22:e0", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.174", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa58a88f-dd", "ovs_interfaceid": "fa58a88f-dd18-4a95-98c8-21e845485f69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.409 189020 DEBUG oslo_concurrency.lockutils [req-d2a85a30-e3c0-4b7c-9a8c-69aac55e4c9c req-089b7e07-1112-4de3-85c5-c40f6b4ffb7b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-5d11e815-9fde-4624-9556-a726f1b266ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.446 189020 DEBUG oslo_concurrency.lockutils [None req-dd5dbf71-cb2f-4888-ade0-8e44d78149e6 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "5d11e815-9fde-4624-9556-a726f1b266ba" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.450 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.536 189020 DEBUG nova.compute.manager [req-87acddd6-a3a3-487f-bf78-39689c4b6f1e req-cb2a9034-2590-47b5-8fb9-b88d50b1a564 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Received event network-vif-plugged-fa58a88f-dd18-4a95-98c8-21e845485f69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.537 189020 DEBUG oslo_concurrency.lockutils [req-87acddd6-a3a3-487f-bf78-39689c4b6f1e req-cb2a9034-2590-47b5-8fb9-b88d50b1a564 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.538 189020 DEBUG oslo_concurrency.lockutils [req-87acddd6-a3a3-487f-bf78-39689c4b6f1e req-cb2a9034-2590-47b5-8fb9-b88d50b1a564 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.538 189020 DEBUG oslo_concurrency.lockutils [req-87acddd6-a3a3-487f-bf78-39689c4b6f1e req-cb2a9034-2590-47b5-8fb9-b88d50b1a564 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "5d11e815-9fde-4624-9556-a726f1b266ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.538 189020 DEBUG nova.compute.manager [req-87acddd6-a3a3-487f-bf78-39689c4b6f1e req-cb2a9034-2590-47b5-8fb9-b88d50b1a564 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] No waiting events found dispatching network-vif-plugged-fa58a88f-dd18-4a95-98c8-21e845485f69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:10:32 compute-0 nova_compute[189016]: 2026-02-18 15:10:32.539 189020 WARNING nova.compute.manager [req-87acddd6-a3a3-487f-bf78-39689c4b6f1e req-cb2a9034-2590-47b5-8fb9-b88d50b1a564 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Received unexpected event network-vif-plugged-fa58a88f-dd18-4a95-98c8-21e845485f69 for instance with vm_state deleted and task_state None.#033[00m
Feb 18 15:10:34 compute-0 nova_compute[189016]: 2026-02-18 15:10:34.743 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:35 compute-0 podman[250414]: 2026-02-18 15:10:35.783588961 +0000 UTC m=+0.104675688 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 18 15:10:35 compute-0 podman[250415]: 2026-02-18 15:10:35.788263977 +0000 UTC m=+0.108925823 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 15:10:37 compute-0 nova_compute[189016]: 2026-02-18 15:10:37.453 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:37 compute-0 podman[250453]: 2026-02-18 15:10:37.804118318 +0000 UTC m=+0.123051623 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 18 15:10:39 compute-0 nova_compute[189016]: 2026-02-18 15:10:39.747 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:41.452 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:10:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:41.453 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:10:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:41.454 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:10:42 compute-0 nova_compute[189016]: 2026-02-18 15:10:42.455 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:44 compute-0 nova_compute[189016]: 2026-02-18 15:10:44.719 189020 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771427429.7165012, 5d11e815-9fde-4624-9556-a726f1b266ba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:10:44 compute-0 nova_compute[189016]: 2026-02-18 15:10:44.719 189020 INFO nova.compute.manager [-] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] VM Stopped (Lifecycle Event)#033[00m
Feb 18 15:10:44 compute-0 nova_compute[189016]: 2026-02-18 15:10:44.746 189020 DEBUG nova.compute.manager [None req-06c873d7-d206-4b19-b59a-a6463b05f783 - - - - - -] [instance: 5d11e815-9fde-4624-9556-a726f1b266ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:10:44 compute-0 nova_compute[189016]: 2026-02-18 15:10:44.750 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.458 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.535 189020 DEBUG oslo_concurrency.lockutils [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "debb3011-9258-4f04-9eb4-592cc56eb3eb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.536 189020 DEBUG oslo_concurrency.lockutils [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.536 189020 DEBUG oslo_concurrency.lockutils [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.536 189020 DEBUG oslo_concurrency.lockutils [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.536 189020 DEBUG oslo_concurrency.lockutils [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.538 189020 INFO nova.compute.manager [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Terminating instance#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.539 189020 DEBUG nova.compute.manager [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 18 15:10:47 compute-0 kernel: tap15d6e821-44 (unregistering): left promiscuous mode
Feb 18 15:10:47 compute-0 NetworkManager[57258]: <info>  [1771427447.5804] device (tap15d6e821-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.587 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:47 compute-0 ovn_controller[99062]: 2026-02-18T15:10:47Z|00061|binding|INFO|Releasing lport 15d6e821-445c-43a7-a37c-e5f1566673fe from this chassis (sb_readonly=0)
Feb 18 15:10:47 compute-0 ovn_controller[99062]: 2026-02-18T15:10:47Z|00062|binding|INFO|Setting lport 15d6e821-445c-43a7-a37c-e5f1566673fe down in Southbound
Feb 18 15:10:47 compute-0 ovn_controller[99062]: 2026-02-18T15:10:47Z|00063|binding|INFO|Removing iface tap15d6e821-44 ovn-installed in OVS
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.592 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:47.596 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:55:86 192.168.0.87'], port_security=['fa:16:3e:c4:55:86 192.168.0.87'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.87/24', 'neutron:device_id': 'debb3011-9258-4f04-9eb4-592cc56eb3eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71c6c5d63b07447388ace322f081ffc3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37e3ac68-e35f-4df2-b2af-136d5a1ee2d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af26da4e-fd70-4a49-a6e8-0a984b969598, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], logical_port=15d6e821-445c-43a7-a37c-e5f1566673fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:10:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:47.598 108400 INFO neutron.agent.ovn.metadata.agent [-] Port 15d6e821-445c-43a7-a37c-e5f1566673fe in datapath c269c00a-f738-4cb6-ac67-09050c56f9f2 unbound from our chassis#033[00m
Feb 18 15:10:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:47.600 108400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c269c00a-f738-4cb6-ac67-09050c56f9f2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 18 15:10:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:47.602 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf6bf7e-14b2-4048-9fef-bb6dc0c07652]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:10:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:47.603 108400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2 namespace which is not needed anymore#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.604 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:47 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Feb 18 15:10:47 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 2min 31.996s CPU time.
Feb 18 15:10:47 compute-0 systemd-machined[158361]: Machine qemu-1-instance-00000001 terminated.
Feb 18 15:10:47 compute-0 neutron-haproxy-ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2[242421]: [NOTICE]   (242425) : haproxy version is 2.8.14-c23fe91
Feb 18 15:10:47 compute-0 neutron-haproxy-ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2[242421]: [NOTICE]   (242425) : path to executable is /usr/sbin/haproxy
Feb 18 15:10:47 compute-0 neutron-haproxy-ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2[242421]: [WARNING]  (242425) : Exiting Master process...
Feb 18 15:10:47 compute-0 neutron-haproxy-ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2[242421]: [WARNING]  (242425) : Exiting Master process...
Feb 18 15:10:47 compute-0 neutron-haproxy-ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2[242421]: [ALERT]    (242425) : Current worker (242427) exited with code 143 (Terminated)
Feb 18 15:10:47 compute-0 neutron-haproxy-ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2[242421]: [WARNING]  (242425) : All workers exited. Exiting... (0)
Feb 18 15:10:47 compute-0 systemd[1]: libpod-14faf275f72a2f8ba7ed8f7fcbc9e05f51208d87b86b294145f9b439949686c4.scope: Deactivated successfully.
Feb 18 15:10:47 compute-0 podman[250505]: 2026-02-18 15:10:47.784087575 +0000 UTC m=+0.063980758 container died 14faf275f72a2f8ba7ed8f7fcbc9e05f51208d87b86b294145f9b439949686c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Feb 18 15:10:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14faf275f72a2f8ba7ed8f7fcbc9e05f51208d87b86b294145f9b439949686c4-userdata-shm.mount: Deactivated successfully.
Feb 18 15:10:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d958937cf8ccc7844da577acfc8d789a743c32067c168d385000a3c0f5ba519-merged.mount: Deactivated successfully.
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.828 189020 DEBUG nova.compute.manager [req-8acd28ce-e0ab-41d5-b5f7-7be3ee433452 req-b19857bf-6aed-47a1-a337-7f19650b4d63 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Received event network-vif-unplugged-15d6e821-445c-43a7-a37c-e5f1566673fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.830 189020 DEBUG oslo_concurrency.lockutils [req-8acd28ce-e0ab-41d5-b5f7-7be3ee433452 req-b19857bf-6aed-47a1-a337-7f19650b4d63 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.831 189020 DEBUG oslo_concurrency.lockutils [req-8acd28ce-e0ab-41d5-b5f7-7be3ee433452 req-b19857bf-6aed-47a1-a337-7f19650b4d63 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.831 189020 DEBUG oslo_concurrency.lockutils [req-8acd28ce-e0ab-41d5-b5f7-7be3ee433452 req-b19857bf-6aed-47a1-a337-7f19650b4d63 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.832 189020 DEBUG nova.compute.manager [req-8acd28ce-e0ab-41d5-b5f7-7be3ee433452 req-b19857bf-6aed-47a1-a337-7f19650b4d63 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] No waiting events found dispatching network-vif-unplugged-15d6e821-445c-43a7-a37c-e5f1566673fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.832 189020 DEBUG nova.compute.manager [req-8acd28ce-e0ab-41d5-b5f7-7be3ee433452 req-b19857bf-6aed-47a1-a337-7f19650b4d63 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Received event network-vif-unplugged-15d6e821-445c-43a7-a37c-e5f1566673fe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 18 15:10:47 compute-0 podman[250505]: 2026-02-18 15:10:47.834747562 +0000 UTC m=+0.114640725 container cleanup 14faf275f72a2f8ba7ed8f7fcbc9e05f51208d87b86b294145f9b439949686c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.840 189020 INFO nova.virt.libvirt.driver [-] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Instance destroyed successfully.#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.842 189020 DEBUG nova.objects.instance [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lazy-loading 'resources' on Instance uuid debb3011-9258-4f04-9eb4-592cc56eb3eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:10:47 compute-0 systemd[1]: libpod-conmon-14faf275f72a2f8ba7ed8f7fcbc9e05f51208d87b86b294145f9b439949686c4.scope: Deactivated successfully.
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.858 189020 DEBUG nova.virt.libvirt.vif [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-18T14:52:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-18T14:52:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71c6c5d63b07447388ace322f081ffc3',ramdisk_id='',reservation_id='r-kc9s0061',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='7cc2a96a-1e6c-474d-b671-0e2626bf4158',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-18T14:52:45Z,user_data=None,user_id='387d978e2b494e88ad13abae2a83321d',uuid=debb3011-9258-4f04-9eb4-592cc56eb3eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.859 189020 DEBUG nova.network.os_vif_util [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converting VIF {"id": "15d6e821-445c-43a7-a37c-e5f1566673fe", "address": "fa:16:3e:c4:55:86", "network": {"id": "c269c00a-f738-4cb6-ac67-09050c56f9f2", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71c6c5d63b07447388ace322f081ffc3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15d6e821-44", "ovs_interfaceid": "15d6e821-445c-43a7-a37c-e5f1566673fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.860 189020 DEBUG nova.network.os_vif_util [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c4:55:86,bridge_name='br-int',has_traffic_filtering=True,id=15d6e821-445c-43a7-a37c-e5f1566673fe,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15d6e821-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.861 189020 DEBUG os_vif [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:55:86,bridge_name='br-int',has_traffic_filtering=True,id=15d6e821-445c-43a7-a37c-e5f1566673fe,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15d6e821-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.863 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.864 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15d6e821-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.866 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.869 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.887 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.896 189020 INFO os_vif [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:55:86,bridge_name='br-int',has_traffic_filtering=True,id=15d6e821-445c-43a7-a37c-e5f1566673fe,network=Network(c269c00a-f738-4cb6-ac67-09050c56f9f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15d6e821-44')#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.897 189020 INFO nova.virt.libvirt.driver [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Deleting instance files /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb_del#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.899 189020 INFO nova.virt.libvirt.driver [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Deletion of /var/lib/nova/instances/debb3011-9258-4f04-9eb4-592cc56eb3eb_del complete#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.956 189020 INFO nova.compute.manager [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Took 0.42 seconds to destroy the instance on the hypervisor.#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.957 189020 DEBUG oslo.service.loopingcall [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.958 189020 DEBUG nova.compute.manager [-] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.958 189020 DEBUG nova.network.neutron [-] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 18 15:10:47 compute-0 podman[250554]: 2026-02-18 15:10:47.960415939 +0000 UTC m=+0.100112384 container remove 14faf275f72a2f8ba7ed8f7fcbc9e05f51208d87b86b294145f9b439949686c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 15:10:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:47.966 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4b07e5-ea10-4e83-8a6e-74d405ed7803]: (4, ('Wed Feb 18 03:10:47 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2 (14faf275f72a2f8ba7ed8f7fcbc9e05f51208d87b86b294145f9b439949686c4)\n14faf275f72a2f8ba7ed8f7fcbc9e05f51208d87b86b294145f9b439949686c4\nWed Feb 18 03:10:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2 (14faf275f72a2f8ba7ed8f7fcbc9e05f51208d87b86b294145f9b439949686c4)\n14faf275f72a2f8ba7ed8f7fcbc9e05f51208d87b86b294145f9b439949686c4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:10:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:47.968 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[744949c3-d763-4aa5-ba30-d225c55f5395]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:10:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:47.970 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc269c00a-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.972 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:47 compute-0 kernel: tapc269c00a-f0: left promiscuous mode
Feb 18 15:10:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:47.977 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[19b709e1-bab0-4f8d-bdc8-2b1e87ec835d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:10:47 compute-0 nova_compute[189016]: 2026-02-18 15:10:47.982 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:47.992 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[3f791455-ecfe-441d-9fbc-721f30241ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:10:47 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:47.995 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[64cc5f34-d0be-4d61-8d38-2a07d44f23db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:10:48 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:48.014 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ce5ec2-f443-4b4a-9793-d10192ef1efd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 346638, 'reachable_time': 44302, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250567, 'error': None, 'target': 'ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:10:48 compute-0 systemd[1]: run-netns-ovnmeta\x2dc269c00a\x2df738\x2d4cb6\x2dac67\x2d09050c56f9f2.mount: Deactivated successfully.
Feb 18 15:10:48 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:48.036 108948 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c269c00a-f738-4cb6-ac67-09050c56f9f2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 18 15:10:48 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:10:48.039 108948 DEBUG oslo.privsep.daemon [-] privsep: reply[4c51f024-66be-49c0-b0c1-3ec710783f53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:10:49 compute-0 nova_compute[189016]: 2026-02-18 15:10:49.660 189020 DEBUG nova.network.neutron [-] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:10:49 compute-0 nova_compute[189016]: 2026-02-18 15:10:49.686 189020 INFO nova.compute.manager [-] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Took 1.73 seconds to deallocate network for instance.#033[00m
Feb 18 15:10:49 compute-0 nova_compute[189016]: 2026-02-18 15:10:49.734 189020 DEBUG oslo_concurrency.lockutils [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:10:49 compute-0 nova_compute[189016]: 2026-02-18 15:10:49.735 189020 DEBUG oslo_concurrency.lockutils [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:10:49 compute-0 nova_compute[189016]: 2026-02-18 15:10:49.740 189020 DEBUG nova.compute.manager [req-8eee0b6b-b922-4b08-853d-866ed8c3b3dd req-d2e8828d-21c2-467a-99bb-b49900fe6fea af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Received event network-vif-deleted-15d6e821-445c-43a7-a37c-e5f1566673fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:10:49 compute-0 nova_compute[189016]: 2026-02-18 15:10:49.805 189020 DEBUG nova.compute.provider_tree [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:10:49 compute-0 nova_compute[189016]: 2026-02-18 15:10:49.819 189020 DEBUG nova.scheduler.client.report [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:10:49 compute-0 nova_compute[189016]: 2026-02-18 15:10:49.848 189020 DEBUG oslo_concurrency.lockutils [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:10:49 compute-0 nova_compute[189016]: 2026-02-18 15:10:49.878 189020 INFO nova.scheduler.client.report [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Deleted allocations for instance debb3011-9258-4f04-9eb4-592cc56eb3eb#033[00m
Feb 18 15:10:49 compute-0 nova_compute[189016]: 2026-02-18 15:10:49.930 189020 DEBUG nova.compute.manager [req-33e097d3-901e-44e0-a87a-4a28b5c209ae req-ffda1894-e1fc-4bc1-894b-f4693845db3b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Received event network-vif-plugged-15d6e821-445c-43a7-a37c-e5f1566673fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:10:49 compute-0 nova_compute[189016]: 2026-02-18 15:10:49.931 189020 DEBUG oslo_concurrency.lockutils [req-33e097d3-901e-44e0-a87a-4a28b5c209ae req-ffda1894-e1fc-4bc1-894b-f4693845db3b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:10:49 compute-0 nova_compute[189016]: 2026-02-18 15:10:49.936 189020 DEBUG oslo_concurrency.lockutils [req-33e097d3-901e-44e0-a87a-4a28b5c209ae req-ffda1894-e1fc-4bc1-894b-f4693845db3b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:10:49 compute-0 nova_compute[189016]: 2026-02-18 15:10:49.937 189020 DEBUG oslo_concurrency.lockutils [req-33e097d3-901e-44e0-a87a-4a28b5c209ae req-ffda1894-e1fc-4bc1-894b-f4693845db3b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:10:49 compute-0 nova_compute[189016]: 2026-02-18 15:10:49.937 189020 DEBUG nova.compute.manager [req-33e097d3-901e-44e0-a87a-4a28b5c209ae req-ffda1894-e1fc-4bc1-894b-f4693845db3b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] No waiting events found dispatching network-vif-plugged-15d6e821-445c-43a7-a37c-e5f1566673fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:10:49 compute-0 nova_compute[189016]: 2026-02-18 15:10:49.937 189020 WARNING nova.compute.manager [req-33e097d3-901e-44e0-a87a-4a28b5c209ae req-ffda1894-e1fc-4bc1-894b-f4693845db3b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Received unexpected event network-vif-plugged-15d6e821-445c-43a7-a37c-e5f1566673fe for instance with vm_state deleted and task_state None.#033[00m
Feb 18 15:10:49 compute-0 nova_compute[189016]: 2026-02-18 15:10:49.955 189020 DEBUG oslo_concurrency.lockutils [None req-9d80c543-9e18-4966-9937-f5a0f03446bd 387d978e2b494e88ad13abae2a83321d 71c6c5d63b07447388ace322f081ffc3 - - default default] Lock "debb3011-9258-4f04-9eb4-592cc56eb3eb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:10:50 compute-0 podman[250572]: 2026-02-18 15:10:50.74586548 +0000 UTC m=+0.070147091 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, release=1770267347, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible)
Feb 18 15:10:50 compute-0 podman[250571]: 2026-02-18 15:10:50.74868323 +0000 UTC m=+0.071729971 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 15:10:52 compute-0 nova_compute[189016]: 2026-02-18 15:10:52.062 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:10:52 compute-0 nova_compute[189016]: 2026-02-18 15:10:52.063 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:10:52 compute-0 nova_compute[189016]: 2026-02-18 15:10:52.063 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:10:52 compute-0 nova_compute[189016]: 2026-02-18 15:10:52.091 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 18 15:10:52 compute-0 nova_compute[189016]: 2026-02-18 15:10:52.092 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:10:52 compute-0 nova_compute[189016]: 2026-02-18 15:10:52.093 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:10:52 compute-0 nova_compute[189016]: 2026-02-18 15:10:52.093 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:10:52 compute-0 nova_compute[189016]: 2026-02-18 15:10:52.460 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:52 compute-0 nova_compute[189016]: 2026-02-18 15:10:52.866 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:55 compute-0 nova_compute[189016]: 2026-02-18 15:10:55.053 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:10:57 compute-0 nova_compute[189016]: 2026-02-18 15:10:57.045 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:10:57 compute-0 nova_compute[189016]: 2026-02-18 15:10:57.194 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:10:57 compute-0 nova_compute[189016]: 2026-02-18 15:10:57.195 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:10:57 compute-0 nova_compute[189016]: 2026-02-18 15:10:57.304 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:10:57 compute-0 nova_compute[189016]: 2026-02-18 15:10:57.304 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:10:57 compute-0 nova_compute[189016]: 2026-02-18 15:10:57.305 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:10:57 compute-0 nova_compute[189016]: 2026-02-18 15:10:57.306 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:10:57 compute-0 podman[250619]: 2026-02-18 15:10:57.463616662 +0000 UTC m=+0.100722400 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, distribution-scope=public, maintainer=Red 
Hat, Inc., build-date=2024-09-18T21:23:30, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Feb 18 15:10:57 compute-0 nova_compute[189016]: 2026-02-18 15:10:57.462 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:57 compute-0 podman[250618]: 2026-02-18 15:10:57.477796754 +0000 UTC m=+0.114003849 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 18 15:10:57 compute-0 podman[250617]: 2026-02-18 15:10:57.481028564 +0000 UTC m=+0.121959406 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 18 15:10:57 compute-0 nova_compute[189016]: 2026-02-18 15:10:57.670 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:10:57 compute-0 nova_compute[189016]: 2026-02-18 15:10:57.671 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5340MB free_disk=72.24028778076172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 15:10:57 compute-0 nova_compute[189016]: 2026-02-18 15:10:57.672 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:10:57 compute-0 nova_compute[189016]: 2026-02-18 15:10:57.672 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:10:57 compute-0 nova_compute[189016]: 2026-02-18 15:10:57.865 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 15:10:57 compute-0 nova_compute[189016]: 2026-02-18 15:10:57.866 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 15:10:57 compute-0 nova_compute[189016]: 2026-02-18 15:10:57.869 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:10:57 compute-0 nova_compute[189016]: 2026-02-18 15:10:57.890 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:10:57 compute-0 nova_compute[189016]: 2026-02-18 15:10:57.939 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:10:58 compute-0 nova_compute[189016]: 2026-02-18 15:10:58.109 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 15:10:58 compute-0 nova_compute[189016]: 2026-02-18 15:10:58.109 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:10:59 compute-0 podman[204930]: time="2026-02-18T15:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:10:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 15:10:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3912 "" "Go-http-client/1.1"
Feb 18 15:11:00 compute-0 nova_compute[189016]: 2026-02-18 15:11:00.967 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:11:01 compute-0 openstack_network_exporter[208107]: ERROR   15:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:11:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:11:01 compute-0 openstack_network_exporter[208107]: ERROR   15:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:11:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:11:02 compute-0 nova_compute[189016]: 2026-02-18 15:11:02.465 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:02 compute-0 nova_compute[189016]: 2026-02-18 15:11:02.834 189020 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771427447.830118, debb3011-9258-4f04-9eb4-592cc56eb3eb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:11:02 compute-0 nova_compute[189016]: 2026-02-18 15:11:02.835 189020 INFO nova.compute.manager [-] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] VM Stopped (Lifecycle Event)#033[00m
Feb 18 15:11:02 compute-0 nova_compute[189016]: 2026-02-18 15:11:02.854 189020 DEBUG nova.compute.manager [None req-ae741c9c-9bd2-4f24-953b-41eefd33e6b5 - - - - - -] [instance: debb3011-9258-4f04-9eb4-592cc56eb3eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:11:02 compute-0 nova_compute[189016]: 2026-02-18 15:11:02.871 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:06 compute-0 nova_compute[189016]: 2026-02-18 15:11:06.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:11:06 compute-0 podman[250674]: 2026-02-18 15:11:06.737328346 +0000 UTC m=+0.065593098 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 15:11:06 compute-0 podman[250673]: 2026-02-18 15:11:06.745442628 +0000 UTC m=+0.074100320 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Feb 18 15:11:07 compute-0 nova_compute[189016]: 2026-02-18 15:11:07.468 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:07 compute-0 nova_compute[189016]: 2026-02-18 15:11:07.874 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:08 compute-0 podman[250718]: 2026-02-18 15:11:08.819036729 +0000 UTC m=+0.144142497 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Feb 18 15:11:12 compute-0 nova_compute[189016]: 2026-02-18 15:11:12.473 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:12 compute-0 nova_compute[189016]: 2026-02-18 15:11:12.876 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:17 compute-0 nova_compute[189016]: 2026-02-18 15:11:17.475 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:17 compute-0 nova_compute[189016]: 2026-02-18 15:11:17.880 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:19 compute-0 ovn_controller[99062]: 2026-02-18T15:11:19Z|00064|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Feb 18 15:11:21 compute-0 podman[250743]: 2026-02-18 15:11:21.774394398 +0000 UTC m=+0.095956231 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 15:11:21 compute-0 podman[250744]: 2026-02-18 15:11:21.780040628 +0000 UTC m=+0.096837793 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, version=9.7, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 18 15:11:22 compute-0 nova_compute[189016]: 2026-02-18 15:11:22.478 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:22 compute-0 nova_compute[189016]: 2026-02-18 15:11:22.883 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.199 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.200 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.200 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.201 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f78deee07d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.201 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.202 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.204 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.205 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f78deee0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.206 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.206 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f78deee0830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.206 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.206 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f78deee2ae0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.206 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.207 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f78deee0890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.207 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.207 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f78deee08f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.207 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.207 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f78deee2390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.207 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.207 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f78e1127770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.207 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.208 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f78deee0950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.208 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.208 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f78deee21b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.208 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.208 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f78deee0d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.208 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.208 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f78deee09b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.209 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.209 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f78deee1010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.209 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.209 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f78deee0a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.209 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.209 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f78deee1280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.209 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.209 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f78deee2240>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.209 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.210 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f78deee0a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.210 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.210 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f78deee22d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.210 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.210 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f78deee2330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.210 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.210 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f78deee23c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.210 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.210 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f78deee0cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.211 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.211 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f78deee2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.211 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.211 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f78deee0d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.211 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.211 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f78deee1520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.211 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.211 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f78deee20f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.212 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f78deee2420>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.212 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.212 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.212 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.212 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.213 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.213 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.213 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.213 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.213 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.213 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.213 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.213 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.213 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.213 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.213 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.213 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.213 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.214 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.214 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.214 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.214 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.214 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.214 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.214 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.214 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.214 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:11:25.214 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:11:27 compute-0 nova_compute[189016]: 2026-02-18 15:11:27.481 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:27 compute-0 podman[250787]: 2026-02-18 15:11:27.775100343 +0000 UTC m=+0.063273389 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 18 15:11:27 compute-0 podman[250788]: 2026-02-18 15:11:27.787687826 +0000 UTC m=+0.072256553 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 18 15:11:27 compute-0 podman[250789]: 2026-02-18 15:11:27.792114385 +0000 UTC m=+0.071759650 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., container_name=kepler, managed_by=edpm_ansible, config_id=kepler, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 18 15:11:27 compute-0 nova_compute[189016]: 2026-02-18 15:11:27.886 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:29 compute-0 podman[204930]: time="2026-02-18T15:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:11:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 15:11:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3911 "" "Go-http-client/1.1"
Feb 18 15:11:31 compute-0 openstack_network_exporter[208107]: ERROR   15:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:11:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:11:31 compute-0 openstack_network_exporter[208107]: ERROR   15:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:11:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:11:32 compute-0 nova_compute[189016]: 2026-02-18 15:11:32.483 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:32 compute-0 nova_compute[189016]: 2026-02-18 15:11:32.890 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:37 compute-0 nova_compute[189016]: 2026-02-18 15:11:37.487 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:37 compute-0 podman[250843]: 2026-02-18 15:11:37.593321537 +0000 UTC m=+0.067619106 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260216, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 18 15:11:37 compute-0 podman[250844]: 2026-02-18 15:11:37.611453159 +0000 UTC m=+0.087360378 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 18 15:11:37 compute-0 nova_compute[189016]: 2026-02-18 15:11:37.892 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:39 compute-0 podman[250885]: 2026-02-18 15:11:39.771481181 +0000 UTC m=+0.101973761 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 18 15:11:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:11:41.455 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:11:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:11:41.457 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:11:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:11:41.458 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:11:42 compute-0 nova_compute[189016]: 2026-02-18 15:11:42.490 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:42 compute-0 nova_compute[189016]: 2026-02-18 15:11:42.895 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:47 compute-0 nova_compute[189016]: 2026-02-18 15:11:47.492 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:47 compute-0 nova_compute[189016]: 2026-02-18 15:11:47.897 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:52 compute-0 nova_compute[189016]: 2026-02-18 15:11:52.045 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:11:52 compute-0 nova_compute[189016]: 2026-02-18 15:11:52.049 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:11:52 compute-0 nova_compute[189016]: 2026-02-18 15:11:52.049 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:11:52 compute-0 nova_compute[189016]: 2026-02-18 15:11:52.050 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 15:11:52 compute-0 nova_compute[189016]: 2026-02-18 15:11:52.067 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 18 15:11:52 compute-0 nova_compute[189016]: 2026-02-18 15:11:52.067 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:11:52 compute-0 nova_compute[189016]: 2026-02-18 15:11:52.068 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:11:52 compute-0 nova_compute[189016]: 2026-02-18 15:11:52.068 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:11:52 compute-0 nova_compute[189016]: 2026-02-18 15:11:52.493 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:52 compute-0 podman[250912]: 2026-02-18 15:11:52.73903258 +0000 UTC m=+0.062413796 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 15:11:52 compute-0 podman[250913]: 2026-02-18 15:11:52.774216827 +0000 UTC m=+0.094587358 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.7, config_id=openstack_network_exporter, distribution-scope=public, release=1770267347, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container)
Feb 18 15:11:52 compute-0 nova_compute[189016]: 2026-02-18 15:11:52.899 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:55 compute-0 nova_compute[189016]: 2026-02-18 15:11:55.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:11:57 compute-0 nova_compute[189016]: 2026-02-18 15:11:57.496 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:57 compute-0 nova_compute[189016]: 2026-02-18 15:11:57.903 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:11:58 compute-0 nova_compute[189016]: 2026-02-18 15:11:58.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:11:58 compute-0 nova_compute[189016]: 2026-02-18 15:11:58.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:11:58 compute-0 nova_compute[189016]: 2026-02-18 15:11:58.101 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:11:58 compute-0 nova_compute[189016]: 2026-02-18 15:11:58.102 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:11:58 compute-0 nova_compute[189016]: 2026-02-18 15:11:58.103 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:11:58 compute-0 nova_compute[189016]: 2026-02-18 15:11:58.103 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:11:58 compute-0 nova_compute[189016]: 2026-02-18 15:11:58.433 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:11:58 compute-0 nova_compute[189016]: 2026-02-18 15:11:58.436 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5335MB free_disk=72.24028778076172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 15:11:58 compute-0 nova_compute[189016]: 2026-02-18 15:11:58.436 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:11:58 compute-0 nova_compute[189016]: 2026-02-18 15:11:58.437 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:11:58 compute-0 nova_compute[189016]: 2026-02-18 15:11:58.668 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 15:11:58 compute-0 nova_compute[189016]: 2026-02-18 15:11:58.669 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 15:11:58 compute-0 nova_compute[189016]: 2026-02-18 15:11:58.695 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:11:58 compute-0 nova_compute[189016]: 2026-02-18 15:11:58.728 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:11:58 compute-0 nova_compute[189016]: 2026-02-18 15:11:58.730 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 15:11:58 compute-0 nova_compute[189016]: 2026-02-18 15:11:58.730 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:11:58 compute-0 podman[250957]: 2026-02-18 15:11:58.756850155 +0000 UTC m=+0.078176189 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Feb 18 15:11:58 compute-0 podman[250958]: 2026-02-18 15:11:58.75868011 +0000 UTC m=+0.078378054 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release-0.7.12=, config_id=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Feb 18 15:11:58 compute-0 podman[250956]: 2026-02-18 15:11:58.774913205 +0000 UTC m=+0.100429193 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Feb 18 15:11:59 compute-0 podman[204930]: time="2026-02-18T15:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:11:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 15:11:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3913 "" "Go-http-client/1.1"
Feb 18 15:12:01 compute-0 openstack_network_exporter[208107]: ERROR   15:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:12:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:12:01 compute-0 openstack_network_exporter[208107]: ERROR   15:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:12:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:12:01 compute-0 nova_compute[189016]: 2026-02-18 15:12:01.731 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:12:02 compute-0 nova_compute[189016]: 2026-02-18 15:12:02.499 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:03 compute-0 nova_compute[189016]: 2026-02-18 15:12:03.118 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:07 compute-0 nova_compute[189016]: 2026-02-18 15:12:07.053 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:12:07 compute-0 nova_compute[189016]: 2026-02-18 15:12:07.508 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:07 compute-0 podman[251014]: 2026-02-18 15:12:07.724206643 +0000 UTC m=+0.050080329 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 18 15:12:07 compute-0 podman[251013]: 2026-02-18 15:12:07.728253013 +0000 UTC m=+0.058733464 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Feb 18 15:12:08 compute-0 nova_compute[189016]: 2026-02-18 15:12:08.127 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:10 compute-0 podman[251056]: 2026-02-18 15:12:10.796662268 +0000 UTC m=+0.126241786 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 18 15:12:12 compute-0 nova_compute[189016]: 2026-02-18 15:12:12.514 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:13 compute-0 nova_compute[189016]: 2026-02-18 15:12:13.131 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:17 compute-0 nova_compute[189016]: 2026-02-18 15:12:17.518 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:18 compute-0 nova_compute[189016]: 2026-02-18 15:12:18.133 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:22 compute-0 nova_compute[189016]: 2026-02-18 15:12:22.520 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:23 compute-0 nova_compute[189016]: 2026-02-18 15:12:23.135 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:23 compute-0 podman[251082]: 2026-02-18 15:12:23.732577179 +0000 UTC m=+0.058200332 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 18 15:12:23 compute-0 podman[251083]: 2026-02-18 15:12:23.761244323 +0000 UTC m=+0.085824730 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.7, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, vcs-type=git, io.openshift.expose-services=, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7)
Feb 18 15:12:27 compute-0 nova_compute[189016]: 2026-02-18 15:12:27.523 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:28 compute-0 nova_compute[189016]: 2026-02-18 15:12:28.137 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:29 compute-0 podman[251124]: 2026-02-18 15:12:29.74419887 +0000 UTC m=+0.072052367 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 18 15:12:29 compute-0 podman[204930]: time="2026-02-18T15:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:12:29 compute-0 podman[251125]: 2026-02-18 15:12:29.751284936 +0000 UTC m=+0.075670776 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, architecture=x86_64, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', 
'/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 18 15:12:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 15:12:29 compute-0 podman[251123]: 2026-02-18 15:12:29.766225679 +0000 UTC m=+0.095044490 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 18 15:12:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3911 "" "Go-http-client/1.1"
Feb 18 15:12:31 compute-0 openstack_network_exporter[208107]: ERROR   15:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:12:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:12:31 compute-0 openstack_network_exporter[208107]: ERROR   15:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:12:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:12:32 compute-0 nova_compute[189016]: 2026-02-18 15:12:32.524 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:33 compute-0 nova_compute[189016]: 2026-02-18 15:12:33.140 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:37 compute-0 nova_compute[189016]: 2026-02-18 15:12:37.527 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:38 compute-0 nova_compute[189016]: 2026-02-18 15:12:38.144 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:38 compute-0 podman[251182]: 2026-02-18 15:12:38.763515582 +0000 UTC m=+0.085414819 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 15:12:38 compute-0 podman[251181]: 2026-02-18 15:12:38.778026154 +0000 UTC m=+0.096508036 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 18 15:12:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:12:41.456 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:12:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:12:41.457 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:12:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:12:41.457 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:12:41 compute-0 podman[251220]: 2026-02-18 15:12:41.775589504 +0000 UTC m=+0.104100055 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, 
io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 18 15:12:42 compute-0 nova_compute[189016]: 2026-02-18 15:12:42.528 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:43 compute-0 nova_compute[189016]: 2026-02-18 15:12:43.145 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:47 compute-0 nova_compute[189016]: 2026-02-18 15:12:47.529 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:48 compute-0 nova_compute[189016]: 2026-02-18 15:12:48.148 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:52 compute-0 nova_compute[189016]: 2026-02-18 15:12:52.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:12:52 compute-0 nova_compute[189016]: 2026-02-18 15:12:52.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:12:52 compute-0 nova_compute[189016]: 2026-02-18 15:12:52.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:12:52 compute-0 nova_compute[189016]: 2026-02-18 15:12:52.534 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:53 compute-0 nova_compute[189016]: 2026-02-18 15:12:53.046 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:12:53 compute-0 nova_compute[189016]: 2026-02-18 15:12:53.151 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:54 compute-0 nova_compute[189016]: 2026-02-18 15:12:54.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:12:54 compute-0 nova_compute[189016]: 2026-02-18 15:12:54.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:12:54 compute-0 nova_compute[189016]: 2026-02-18 15:12:54.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 15:12:54 compute-0 nova_compute[189016]: 2026-02-18 15:12:54.134 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 18 15:12:54 compute-0 podman[251247]: 2026-02-18 15:12:54.749898152 +0000 UTC m=+0.070038216 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, managed_by=edpm_ansible, release=1770267347, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, architecture=x86_64)
Feb 18 15:12:54 compute-0 podman[251246]: 2026-02-18 15:12:54.75220753 +0000 UTC m=+0.071465332 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 15:12:56 compute-0 nova_compute[189016]: 2026-02-18 15:12:56.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:12:57 compute-0 nova_compute[189016]: 2026-02-18 15:12:57.536 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:58 compute-0 nova_compute[189016]: 2026-02-18 15:12:58.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:12:58 compute-0 nova_compute[189016]: 2026-02-18 15:12:58.154 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:12:59 compute-0 podman[204930]: time="2026-02-18T15:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:12:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 15:12:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3916 "" "Go-http-client/1.1"
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.087 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.087 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.087 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.088 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.446 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.448 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5353MB free_disk=72.24028778076172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.448 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.449 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.524 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.524 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.541 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing inventories for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.570 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating ProviderTree inventory for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.571 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating inventory in ProviderTree for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.586 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing aggregate associations for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.616 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing trait associations for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380, traits: HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.642 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.659 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.661 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 15:13:00 compute-0 nova_compute[189016]: 2026-02-18 15:13:00.661 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:13:00 compute-0 podman[251291]: 2026-02-18 15:13:00.745154884 +0000 UTC m=+0.067204906 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Feb 18 15:13:00 compute-0 podman[251290]: 2026-02-18 15:13:00.747886502 +0000 UTC m=+0.078890807 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 18 15:13:00 compute-0 podman[251292]: 2026-02-18 15:13:00.777457078 +0000 UTC m=+0.098935916 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, managed_by=edpm_ansible, io.openshift.tags=base rhel9, name=ubi9, container_name=kepler, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=kepler, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc.)
Feb 18 15:13:01 compute-0 openstack_network_exporter[208107]: ERROR   15:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:13:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:13:01 compute-0 openstack_network_exporter[208107]: ERROR   15:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:13:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:13:01 compute-0 nova_compute[189016]: 2026-02-18 15:13:01.655 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:13:02 compute-0 nova_compute[189016]: 2026-02-18 15:13:02.538 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:03 compute-0 nova_compute[189016]: 2026-02-18 15:13:03.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:13:03 compute-0 nova_compute[189016]: 2026-02-18 15:13:03.156 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:07 compute-0 nova_compute[189016]: 2026-02-18 15:13:07.540 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:08 compute-0 nova_compute[189016]: 2026-02-18 15:13:08.160 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:09 compute-0 nova_compute[189016]: 2026-02-18 15:13:09.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:13:09 compute-0 podman[251348]: 2026-02-18 15:13:09.789566721 +0000 UTC m=+0.096172707 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 18 15:13:09 compute-0 podman[251347]: 2026-02-18 15:13:09.808830821 +0000 UTC m=+0.121888888 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, 
maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644)
Feb 18 15:13:12 compute-0 nova_compute[189016]: 2026-02-18 15:13:12.545 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:12 compute-0 podman[251391]: 2026-02-18 15:13:12.778494536 +0000 UTC m=+0.105201512 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 18 15:13:13 compute-0 nova_compute[189016]: 2026-02-18 15:13:13.163 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:17 compute-0 nova_compute[189016]: 2026-02-18 15:13:17.546 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:18 compute-0 nova_compute[189016]: 2026-02-18 15:13:18.165 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:22 compute-0 nova_compute[189016]: 2026-02-18 15:13:22.548 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:23 compute-0 nova_compute[189016]: 2026-02-18 15:13:23.166 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.199 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.201 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.201 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.202 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f78deee07d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.204 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.205 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.206 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f78deee0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.206 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.206 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f78deee0830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.206 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 rsyslogd[239561]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.208 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.206 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f78deee2ae0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.208 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.208 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.208 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.209 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.209 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.209 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.209 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.209 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.209 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.209 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.209 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.209 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.210 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.210 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.210 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.210 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.210 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.210 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.210 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.208 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f78deee0890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.210 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.211 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dd90>] with cache [{}], pollster history [{'disk.device.read.latency': [], 'disk.device.read.bytes': [], 'disk.device.read.requests': [], 'disk.device.allocation': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.211 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.211 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f78deee08f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.211 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.211 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f78deee2390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.211 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.211 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f78e1127770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.211 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.211 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f78deee0950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.211 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f78deee21b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.212 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f78deee0d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.212 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f78deee09b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.212 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f78deee1010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.212 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f78deee0a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.212 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f78deee1280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.212 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f78deee2240>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.212 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f78deee0a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.213 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.213 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f78deee22d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.213 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.213 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f78deee2330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.213 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.213 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f78deee23c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.213 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.213 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f78deee0cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.213 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.213 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f78deee2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.213 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.214 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f78deee0d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.214 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.214 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f78deee1520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.214 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.214 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f78deee20f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.214 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.214 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f78deee2420>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.214 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.214 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.214 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.214 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.214 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.215 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.216 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.216 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.216 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:13:25.216 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:13:25 compute-0 podman[251421]: 2026-02-18 15:13:25.748067246 +0000 UTC m=+0.069911943 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 15:13:25 compute-0 podman[251422]: 2026-02-18 15:13:25.785396386 +0000 UTC m=+0.097655984 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 18 15:13:27 compute-0 nova_compute[189016]: 2026-02-18 15:13:27.549 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:28 compute-0 nova_compute[189016]: 2026-02-18 15:13:28.170 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:29 compute-0 podman[204930]: time="2026-02-18T15:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:13:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 15:13:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3918 "" "Go-http-client/1.1"
Feb 18 15:13:31 compute-0 openstack_network_exporter[208107]: ERROR   15:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:13:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:13:31 compute-0 openstack_network_exporter[208107]: ERROR   15:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:13:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:13:31 compute-0 podman[251468]: 2026-02-18 15:13:31.749628346 +0000 UTC m=+0.073225576 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, container_name=kepler, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, version=9.4, build-date=2024-09-18T21:23:30, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 18 15:13:31 compute-0 podman[251466]: 2026-02-18 15:13:31.764140667 +0000 UTC m=+0.094489945 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 18 15:13:31 compute-0 podman[251467]: 2026-02-18 15:13:31.780751931 +0000 UTC m=+0.097320746 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 18 15:13:32 compute-0 nova_compute[189016]: 2026-02-18 15:13:32.553 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:33 compute-0 nova_compute[189016]: 2026-02-18 15:13:33.173 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:37 compute-0 nova_compute[189016]: 2026-02-18 15:13:37.556 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:38 compute-0 nova_compute[189016]: 2026-02-18 15:13:38.177 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:40 compute-0 podman[251521]: 2026-02-18 15:13:40.743383369 +0000 UTC m=+0.065632373 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 15:13:40 compute-0 podman[251520]: 2026-02-18 15:13:40.769533353 +0000 UTC m=+0.096299050 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base 
Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Feb 18 15:13:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:13:41.457 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:13:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:13:41.458 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:13:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:13:41.458 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:13:42 compute-0 nova_compute[189016]: 2026-02-18 15:13:42.561 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:43 compute-0 nova_compute[189016]: 2026-02-18 15:13:43.179 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:43 compute-0 podman[251562]: 2026-02-18 15:13:43.77671953 +0000 UTC m=+0.105133291 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, 
org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 15:13:47 compute-0 nova_compute[189016]: 2026-02-18 15:13:47.563 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:48 compute-0 nova_compute[189016]: 2026-02-18 15:13:48.182 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:52 compute-0 nova_compute[189016]: 2026-02-18 15:13:52.567 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:53 compute-0 nova_compute[189016]: 2026-02-18 15:13:53.044 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:13:53 compute-0 nova_compute[189016]: 2026-02-18 15:13:53.049 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:13:53 compute-0 nova_compute[189016]: 2026-02-18 15:13:53.050 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:13:53 compute-0 nova_compute[189016]: 2026-02-18 15:13:53.184 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:54 compute-0 nova_compute[189016]: 2026-02-18 15:13:54.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:13:55 compute-0 nova_compute[189016]: 2026-02-18 15:13:55.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:13:55 compute-0 nova_compute[189016]: 2026-02-18 15:13:55.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:13:55 compute-0 nova_compute[189016]: 2026-02-18 15:13:55.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 15:13:55 compute-0 nova_compute[189016]: 2026-02-18 15:13:55.406 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 18 15:13:56 compute-0 podman[251589]: 2026-02-18 15:13:56.743651015 +0000 UTC m=+0.061931591 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 15:13:56 compute-0 podman[251590]: 2026-02-18 15:13:56.769694916 +0000 UTC m=+0.079869439 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 
'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git)
Feb 18 15:13:57 compute-0 nova_compute[189016]: 2026-02-18 15:13:57.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:13:57 compute-0 nova_compute[189016]: 2026-02-18 15:13:57.569 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:58 compute-0 nova_compute[189016]: 2026-02-18 15:13:58.186 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:13:59 compute-0 podman[204930]: time="2026-02-18T15:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:13:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 15:13:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3917 "" "Go-http-client/1.1"
Feb 18 15:14:00 compute-0 nova_compute[189016]: 2026-02-18 15:14:00.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:14:00 compute-0 nova_compute[189016]: 2026-02-18 15:14:00.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:14:00 compute-0 nova_compute[189016]: 2026-02-18 15:14:00.099 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:14:00 compute-0 nova_compute[189016]: 2026-02-18 15:14:00.100 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:14:00 compute-0 nova_compute[189016]: 2026-02-18 15:14:00.100 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:14:00 compute-0 nova_compute[189016]: 2026-02-18 15:14:00.100 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:14:00 compute-0 nova_compute[189016]: 2026-02-18 15:14:00.455 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:14:00 compute-0 nova_compute[189016]: 2026-02-18 15:14:00.457 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5355MB free_disk=72.24028778076172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 15:14:00 compute-0 nova_compute[189016]: 2026-02-18 15:14:00.457 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:14:00 compute-0 nova_compute[189016]: 2026-02-18 15:14:00.457 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:14:00 compute-0 nova_compute[189016]: 2026-02-18 15:14:00.566 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 15:14:00 compute-0 nova_compute[189016]: 2026-02-18 15:14:00.567 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 15:14:00 compute-0 nova_compute[189016]: 2026-02-18 15:14:00.593 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:14:00 compute-0 nova_compute[189016]: 2026-02-18 15:14:00.618 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:14:00 compute-0 nova_compute[189016]: 2026-02-18 15:14:00.619 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 15:14:00 compute-0 nova_compute[189016]: 2026-02-18 15:14:00.619 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:14:01 compute-0 openstack_network_exporter[208107]: ERROR   15:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:14:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:14:01 compute-0 openstack_network_exporter[208107]: ERROR   15:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:14:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:14:02 compute-0 nova_compute[189016]: 2026-02-18 15:14:02.571 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:02 compute-0 podman[251633]: 2026-02-18 15:14:02.752243095 +0000 UTC m=+0.071972992 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Feb 18 15:14:02 compute-0 podman[251634]: 2026-02-18 15:14:02.762551733 +0000 UTC m=+0.082249019 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., config_id=kepler, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.buildah.version=1.29.0, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, com.redhat.component=ubi9-container, container_name=kepler, version=9.4, io.openshift.expose-services=, name=ubi9)
Feb 18 15:14:02 compute-0 podman[251632]: 2026-02-18 15:14:02.773010064 +0000 UTC m=+0.098111735 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 18 15:14:03 compute-0 nova_compute[189016]: 2026-02-18 15:14:03.189 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:05 compute-0 nova_compute[189016]: 2026-02-18 15:14:05.620 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:14:07 compute-0 nova_compute[189016]: 2026-02-18 15:14:07.573 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:08 compute-0 nova_compute[189016]: 2026-02-18 15:14:08.192 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:10 compute-0 nova_compute[189016]: 2026-02-18 15:14:10.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:14:11 compute-0 podman[251686]: 2026-02-18 15:14:11.746276007 +0000 UTC m=+0.066596037 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 15:14:11 compute-0 podman[251685]: 2026-02-18 15:14:11.750251726 +0000 UTC m=+0.071884799 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute)
Feb 18 15:14:12 compute-0 nova_compute[189016]: 2026-02-18 15:14:12.577 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:13 compute-0 nova_compute[189016]: 2026-02-18 15:14:13.196 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:14 compute-0 podman[251727]: 2026-02-18 15:14:14.785912125 +0000 UTC m=+0.107156732 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Feb 18 15:14:17 compute-0 nova_compute[189016]: 2026-02-18 15:14:17.579 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:18 compute-0 nova_compute[189016]: 2026-02-18 15:14:18.198 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:22 compute-0 nova_compute[189016]: 2026-02-18 15:14:22.581 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:23 compute-0 nova_compute[189016]: 2026-02-18 15:14:23.201 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:27 compute-0 nova_compute[189016]: 2026-02-18 15:14:27.583 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:27 compute-0 podman[251754]: 2026-02-18 15:14:27.73837551 +0000 UTC m=+0.067553211 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 18 15:14:27 compute-0 podman[251755]: 2026-02-18 15:14:27.758084163 +0000 UTC m=+0.081156342 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, version=9.7, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 18 15:14:28 compute-0 nova_compute[189016]: 2026-02-18 15:14:28.208 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:29 compute-0 podman[204930]: time="2026-02-18T15:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:14:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 15:14:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3914 "" "Go-http-client/1.1"
Feb 18 15:14:31 compute-0 openstack_network_exporter[208107]: ERROR   15:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:14:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:14:31 compute-0 openstack_network_exporter[208107]: ERROR   15:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:14:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:14:32 compute-0 nova_compute[189016]: 2026-02-18 15:14:32.586 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:33 compute-0 nova_compute[189016]: 2026-02-18 15:14:33.220 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:33 compute-0 podman[251796]: 2026-02-18 15:14:33.754717492 +0000 UTC m=+0.070897854 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, config_id=kepler, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, distribution-scope=public, io.openshift.tags=base rhel9)
Feb 18 15:14:33 compute-0 podman[251794]: 2026-02-18 15:14:33.760410315 +0000 UTC m=+0.085792628 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 18 15:14:33 compute-0 podman[251795]: 2026-02-18 15:14:33.765209475 +0000 UTC m=+0.087546451 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 18 15:14:37 compute-0 nova_compute[189016]: 2026-02-18 15:14:37.589 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:38 compute-0 nova_compute[189016]: 2026-02-18 15:14:38.223 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:14:41.458 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:14:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:14:41.459 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:14:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:14:41.459 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:14:42 compute-0 nova_compute[189016]: 2026-02-18 15:14:42.591 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:42 compute-0 podman[251850]: 2026-02-18 15:14:42.748848846 +0000 UTC m=+0.071509730 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 18 15:14:42 compute-0 podman[251851]: 2026-02-18 15:14:42.748883067 +0000 UTC m=+0.071435919 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 18 15:14:43 compute-0 nova_compute[189016]: 2026-02-18 15:14:43.227 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:44 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:14:44.724 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:bc:a6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:81:9f:51:00:b1'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:14:44 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:14:44.726 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 18 15:14:44 compute-0 nova_compute[189016]: 2026-02-18 15:14:44.730 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:45 compute-0 podman[251891]: 2026-02-18 15:14:45.79545283 +0000 UTC m=+0.122024554 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 18 15:14:47 compute-0 nova_compute[189016]: 2026-02-18 15:14:47.593 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:48 compute-0 nova_compute[189016]: 2026-02-18 15:14:48.229 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:48 compute-0 nova_compute[189016]: 2026-02-18 15:14:48.622 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:14:52 compute-0 nova_compute[189016]: 2026-02-18 15:14:52.594 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:52 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:14:52.731 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bff0df27-aa33-4d98-b417-cc9248f7a486, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:14:53 compute-0 nova_compute[189016]: 2026-02-18 15:14:53.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:14:53 compute-0 nova_compute[189016]: 2026-02-18 15:14:53.232 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:54 compute-0 nova_compute[189016]: 2026-02-18 15:14:54.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:14:54 compute-0 nova_compute[189016]: 2026-02-18 15:14:54.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:14:55 compute-0 nova_compute[189016]: 2026-02-18 15:14:55.053 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:14:57 compute-0 nova_compute[189016]: 2026-02-18 15:14:57.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:14:57 compute-0 nova_compute[189016]: 2026-02-18 15:14:57.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:14:57 compute-0 nova_compute[189016]: 2026-02-18 15:14:57.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 15:14:57 compute-0 nova_compute[189016]: 2026-02-18 15:14:57.066 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 18 15:14:57 compute-0 nova_compute[189016]: 2026-02-18 15:14:57.597 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:58 compute-0 nova_compute[189016]: 2026-02-18 15:14:58.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:14:58 compute-0 nova_compute[189016]: 2026-02-18 15:14:58.234 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:14:58 compute-0 podman[251919]: 2026-02-18 15:14:58.780398036 +0000 UTC m=+0.099854789 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, com.redhat.component=ubi9-minimal-container)
Feb 18 15:14:58 compute-0 podman[251918]: 2026-02-18 15:14:58.790901209 +0000 UTC m=+0.117245354 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 15:14:59 compute-0 podman[204930]: time="2026-02-18T15:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:14:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 15:14:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3912 "" "Go-http-client/1.1"
Feb 18 15:15:01 compute-0 openstack_network_exporter[208107]: ERROR   15:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:15:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:15:01 compute-0 openstack_network_exporter[208107]: ERROR   15:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:15:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.045 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.064 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.065 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.098 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.098 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.098 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.098 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.464 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.466 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5355MB free_disk=72.24205780029297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.466 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.466 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.600 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.741 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.742 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.946 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.967 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.968 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 15:15:02 compute-0 nova_compute[189016]: 2026-02-18 15:15:02.969 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:15:03 compute-0 nova_compute[189016]: 2026-02-18 15:15:03.237 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:04 compute-0 podman[251962]: 2026-02-18 15:15:04.758693607 +0000 UTC m=+0.069427848 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 18 15:15:04 compute-0 podman[251963]: 2026-02-18 15:15:04.770065651 +0000 UTC m=+0.080421283 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 18 15:15:04 compute-0 podman[251964]: 2026-02-18 15:15:04.775255581 +0000 UTC m=+0.085422818 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, container_name=kepler, com.redhat.component=ubi9-container, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, release-0.7.12=, vendor=Red Hat, Inc.)
Feb 18 15:15:05 compute-0 nova_compute[189016]: 2026-02-18 15:15:05.955 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:15:07 compute-0 nova_compute[189016]: 2026-02-18 15:15:07.602 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:08 compute-0 nova_compute[189016]: 2026-02-18 15:15:08.245 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:09 compute-0 nova_compute[189016]: 2026-02-18 15:15:09.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:15:09 compute-0 nova_compute[189016]: 2026-02-18 15:15:09.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 18 15:15:11 compute-0 nova_compute[189016]: 2026-02-18 15:15:11.116 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:15:12 compute-0 nova_compute[189016]: 2026-02-18 15:15:12.606 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:13 compute-0 nova_compute[189016]: 2026-02-18 15:15:13.250 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:13 compute-0 podman[252018]: 2026-02-18 15:15:13.590016489 +0000 UTC m=+0.063960241 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 18 15:15:13 compute-0 podman[252019]: 2026-02-18 15:15:13.620700857 +0000 UTC m=+0.090968597 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 18 15:15:14 compute-0 nova_compute[189016]: 2026-02-18 15:15:14.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:15:14 compute-0 ovn_controller[99062]: 2026-02-18T15:15:14Z|00065|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Feb 18 15:15:16 compute-0 nova_compute[189016]: 2026-02-18 15:15:16.081 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:15:16 compute-0 nova_compute[189016]: 2026-02-18 15:15:16.082 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 18 15:15:16 compute-0 nova_compute[189016]: 2026-02-18 15:15:16.111 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 18 15:15:16 compute-0 podman[252059]: 2026-02-18 15:15:16.785650679 +0000 UTC m=+0.106178047 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, 
tcib_managed=true, container_name=ovn_controller)
Feb 18 15:15:17 compute-0 nova_compute[189016]: 2026-02-18 15:15:17.607 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:18 compute-0 nova_compute[189016]: 2026-02-18 15:15:18.254 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:22 compute-0 nova_compute[189016]: 2026-02-18 15:15:22.609 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:23 compute-0 nova_compute[189016]: 2026-02-18 15:15:23.257 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.201 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.203 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.203 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.204 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f78deee07d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.206 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.207 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.207 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.207 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.207 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.207 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.207 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.207 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.207 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.208 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.208 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.208 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.208 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.208 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.210 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.210 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f78deee0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.210 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.210 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f78deee0830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.211 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.211 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f78deee2ae0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.211 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.211 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f78deee0890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.211 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.211 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f78deee08f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.211 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.211 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f78deee2390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.212 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f78e1127770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.212 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f78deee0950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.212 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f78deee21b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.212 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f78deee0d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.212 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.213 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f78deee09b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.213 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.213 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f78deee1010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.213 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.213 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f78deee0a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.213 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.213 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f78deee1280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.213 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.213 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f78deee2240>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.213 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.214 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f78deee0a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.214 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.214 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f78deee22d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.214 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.214 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f78deee2330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.214 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.214 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f78deee23c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.214 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.214 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f78deee0cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.215 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.215 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f78deee2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.215 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.215 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f78deee0d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.215 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.215 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f78deee1520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.215 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.215 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f78deee20f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.215 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.215 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f78deee2420>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.216 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.216 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.216 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.216 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.216 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.216 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.216 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.216 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.217 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.217 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.217 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.217 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.217 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.217 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.217 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.217 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.217 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.217 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.217 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.218 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.218 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.218 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.218 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.218 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.218 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.218 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:15:25.218 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:15:26 compute-0 nova_compute[189016]: 2026-02-18 15:15:26.247 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:27 compute-0 nova_compute[189016]: 2026-02-18 15:15:27.612 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:28 compute-0 nova_compute[189016]: 2026-02-18 15:15:28.260 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:29 compute-0 nova_compute[189016]: 2026-02-18 15:15:29.556 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:29 compute-0 nova_compute[189016]: 2026-02-18 15:15:29.606 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:29 compute-0 nova_compute[189016]: 2026-02-18 15:15:29.699 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:29 compute-0 podman[204930]: time="2026-02-18T15:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:15:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28006 "" "Go-http-client/1.1"
Feb 18 15:15:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3914 "" "Go-http-client/1.1"
Feb 18 15:15:29 compute-0 podman[252087]: 2026-02-18 15:15:29.758797212 +0000 UTC m=+0.072482234 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, version=9.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, io.buildah.version=1.33.7)
Feb 18 15:15:29 compute-0 podman[252086]: 2026-02-18 15:15:29.761638943 +0000 UTC m=+0.078962406 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 18 15:15:30 compute-0 nova_compute[189016]: 2026-02-18 15:15:30.315 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:31 compute-0 openstack_network_exporter[208107]: ERROR   15:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:15:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:15:31 compute-0 openstack_network_exporter[208107]: ERROR   15:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:15:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:15:32 compute-0 nova_compute[189016]: 2026-02-18 15:15:32.615 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:33 compute-0 nova_compute[189016]: 2026-02-18 15:15:33.262 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:33 compute-0 nova_compute[189016]: 2026-02-18 15:15:33.908 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:34 compute-0 nova_compute[189016]: 2026-02-18 15:15:34.959 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:35 compute-0 podman[252134]: 2026-02-18 15:15:35.761670099 +0000 UTC m=+0.077778187 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, com.redhat.component=ubi9-container, managed_by=edpm_ansible, release=1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-type=git, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, config_id=kepler)
Feb 18 15:15:35 compute-0 podman[252133]: 2026-02-18 15:15:35.776388067 +0000 UTC m=+0.099626964 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 18 15:15:35 compute-0 podman[252132]: 2026-02-18 15:15:35.781238058 +0000 UTC m=+0.103992613 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 18 15:15:37 compute-0 nova_compute[189016]: 2026-02-18 15:15:37.618 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:38 compute-0 nova_compute[189016]: 2026-02-18 15:15:38.265 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:38 compute-0 nova_compute[189016]: 2026-02-18 15:15:38.841 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:41.459 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:41.462 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:41.462 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:15:41 compute-0 nova_compute[189016]: 2026-02-18 15:15:41.619 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:15:42 compute-0 nova_compute[189016]: 2026-02-18 15:15:42.621 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:42 compute-0 nova_compute[189016]: 2026-02-18 15:15:42.815 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:43 compute-0 nova_compute[189016]: 2026-02-18 15:15:43.180 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:43 compute-0 nova_compute[189016]: 2026-02-18 15:15:43.267 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:43 compute-0 nova_compute[189016]: 2026-02-18 15:15:43.401 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:43 compute-0 nova_compute[189016]: 2026-02-18 15:15:43.424 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:43 compute-0 podman[252187]: 2026-02-18 15:15:43.753622342 +0000 UTC m=+0.080862073 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260216, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 18 15:15:43 compute-0 podman[252188]: 2026-02-18 15:15:43.763799127 +0000 UTC m=+0.086150076 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 15:15:44 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:44.957 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:bc:a6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:81:9f:51:00:b1'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:15:44 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:44.959 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 18 15:15:44 compute-0 nova_compute[189016]: 2026-02-18 15:15:44.959 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:46 compute-0 nova_compute[189016]: 2026-02-18 15:15:46.273 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:46 compute-0 nova_compute[189016]: 2026-02-18 15:15:46.375 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.192 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Acquiring lock "b81109ce-a363-4a87-b75b-e20429154b94" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.193 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "b81109ce-a363-4a87-b75b-e20429154b94" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.218 189020 DEBUG nova.compute.manager [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.322 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.323 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.336 189020 DEBUG nova.virt.hardware [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.337 189020 INFO nova.compute.claims [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.513 189020 DEBUG nova.compute.provider_tree [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.543 189020 DEBUG nova.scheduler.client.report [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.569 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.570 189020 DEBUG nova.compute.manager [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.628 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.630 189020 DEBUG nova.compute.manager [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.631 189020 DEBUG nova.network.neutron [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.654 189020 INFO nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.680 189020 DEBUG nova.compute.manager [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 18 15:15:47 compute-0 podman[252230]: 2026-02-18 15:15:47.775483562 +0000 UTC m=+0.100311480 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.801 189020 DEBUG nova.compute.manager [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.803 189020 DEBUG nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.803 189020 INFO nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Creating image(s)#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.804 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Acquiring lock "/var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.805 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "/var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.805 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "/var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.806 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Acquiring lock "7b8f481705a33c6196332050fe7f03324002d985" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:47 compute-0 nova_compute[189016]: 2026-02-18 15:15:47.807 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "7b8f481705a33c6196332050fe7f03324002d985" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:48 compute-0 nova_compute[189016]: 2026-02-18 15:15:48.269 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:48 compute-0 nova_compute[189016]: 2026-02-18 15:15:48.393 189020 DEBUG nova.policy [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '90884ccf9a964b498da9370fd7f4bdce', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8eb42d319554625b909271b1bba25e5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.112 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.143 189020 DEBUG nova.network.neutron [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Successfully created port: ef12a070-08ba-4fda-8991-d7b4ec9d9258 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.182 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985.part --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.183 189020 DEBUG nova.virt.images [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] 3b4a4a6a-1650-453f-ba10-3bb16d71641c was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.184 189020 DEBUG nova.privsep.utils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.185 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985.part /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.403 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985.part /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985.converted" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.406 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.479 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985.converted --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.482 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "7b8f481705a33c6196332050fe7f03324002d985" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.506 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.558 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.560 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Acquiring lock "7b8f481705a33c6196332050fe7f03324002d985" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.560 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "7b8f481705a33c6196332050fe7f03324002d985" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.576 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.637 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.638 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985,backing_fmt=raw /var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.674 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985,backing_fmt=raw /var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94/disk 1073741824" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.675 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "7b8f481705a33c6196332050fe7f03324002d985" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.676 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.723 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.725 189020 DEBUG nova.virt.disk.api [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Checking if we can resize image /var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.725 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.778 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.781 189020 DEBUG nova.virt.disk.api [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Cannot resize image /var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.781 189020 DEBUG nova.objects.instance [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lazy-loading 'migration_context' on Instance uuid b81109ce-a363-4a87-b75b-e20429154b94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.796 189020 DEBUG nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.797 189020 DEBUG nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Ensure instance console log exists: /var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.798 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.798 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 15:15:50 compute-0 nova_compute[189016]: 2026-02-18 15:15:50.799 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 15:15:50 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:50.962 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bff0df27-aa33-4d98-b417-cc9248f7a486, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 18 15:15:52 compute-0 nova_compute[189016]: 2026-02-18 15:15:52.231 189020 DEBUG nova.network.neutron [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Successfully updated port: ef12a070-08ba-4fda-8991-d7b4ec9d9258 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 18 15:15:52 compute-0 nova_compute[189016]: 2026-02-18 15:15:52.247 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Acquiring lock "refresh_cache-b81109ce-a363-4a87-b75b-e20429154b94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 18 15:15:52 compute-0 nova_compute[189016]: 2026-02-18 15:15:52.247 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Acquired lock "refresh_cache-b81109ce-a363-4a87-b75b-e20429154b94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 18 15:15:52 compute-0 nova_compute[189016]: 2026-02-18 15:15:52.247 189020 DEBUG nova.network.neutron [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 18 15:15:52 compute-0 nova_compute[189016]: 2026-02-18 15:15:52.629 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:15:52 compute-0 nova_compute[189016]: 2026-02-18 15:15:52.717 189020 DEBUG nova.network.neutron [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.079 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.202 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Acquiring lock "0914ee8e-421d-4e49-958e-4e659b7fdc22" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.203 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.229 189020 DEBUG nova.compute.manager [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.266 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Acquiring lock "df0877c8-0a3d-46d8-aa05-76d180a75938" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.267 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "df0877c8-0a3d-46d8-aa05-76d180a75938" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.273 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.301 189020 DEBUG nova.compute.manager [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.345 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.346 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.358 189020 DEBUG nova.virt.hardware [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.359 189020 INFO nova.compute.claims [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Claim successful on node compute-0.ctlplane.example.com
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.386 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.529 189020 DEBUG nova.compute.provider_tree [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.549 189020 DEBUG nova.scheduler.client.report [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.577 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.578 189020 DEBUG nova.compute.manager [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.581 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.590 189020 DEBUG nova.virt.hardware [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.590 189020 INFO nova.compute.claims [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Claim successful on node compute-0.ctlplane.example.com
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.653 189020 DEBUG nova.compute.manager [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.653 189020 DEBUG nova.network.neutron [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.677 189020 INFO nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.727 189020 DEBUG nova.compute.manager [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.816 189020 DEBUG nova.compute.manager [req-ca951591-ab15-4000-bc5c-9cfcc1a08a5e req-a9f812d1-0f1f-4f0a-8c84-474a1589acff af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Received event network-changed-ef12a070-08ba-4fda-8991-d7b4ec9d9258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.817 189020 DEBUG nova.compute.manager [req-ca951591-ab15-4000-bc5c-9cfcc1a08a5e req-a9f812d1-0f1f-4f0a-8c84-474a1589acff af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Refreshing instance network info cache due to event network-changed-ef12a070-08ba-4fda-8991-d7b4ec9d9258. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.817 189020 DEBUG oslo_concurrency.lockutils [req-ca951591-ab15-4000-bc5c-9cfcc1a08a5e req-a9f812d1-0f1f-4f0a-8c84-474a1589acff af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-b81109ce-a363-4a87-b75b-e20429154b94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.822 189020 DEBUG nova.compute.provider_tree [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.850 189020 DEBUG nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.851 189020 INFO nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Creating image(s)
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.852 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Acquiring lock "/var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.853 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lock "/var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.854 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lock "/var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.875 189020 DEBUG nova.scheduler.client.report [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.880 189020 DEBUG oslo_concurrency.processutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.902 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.903 189020 DEBUG nova.compute.manager [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.935 189020 DEBUG oslo_concurrency.processutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.936 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Acquiring lock "7b8f481705a33c6196332050fe7f03324002d985" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.936 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lock "7b8f481705a33c6196332050fe7f03324002d985" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.947 189020 DEBUG oslo_concurrency.processutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.964 189020 DEBUG nova.compute.manager [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.965 189020 DEBUG nova.network.neutron [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 18 15:15:53 compute-0 nova_compute[189016]: 2026-02-18 15:15:53.995 189020 INFO nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.017 189020 DEBUG nova.compute.manager [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.023 189020 DEBUG oslo_concurrency.processutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.024 189020 DEBUG oslo_concurrency.processutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985,backing_fmt=raw /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.059 189020 DEBUG oslo_concurrency.processutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985,backing_fmt=raw /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk 1073741824" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.060 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lock "7b8f481705a33c6196332050fe7f03324002d985" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.061 189020 DEBUG oslo_concurrency.processutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.137 189020 DEBUG nova.compute.manager [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.139 189020 DEBUG nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.140 189020 INFO nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Creating image(s)
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.141 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Acquiring lock "/var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.141 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "/var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.142 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "/var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.155 189020 DEBUG oslo_concurrency.processutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.157 189020 DEBUG nova.virt.disk.api [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Checking if we can resize image /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.158 189020 DEBUG oslo_concurrency.processutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.174 189020 DEBUG oslo_concurrency.processutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.225 189020 DEBUG oslo_concurrency.processutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.226 189020 DEBUG nova.virt.disk.api [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Cannot resize image /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.227 189020 DEBUG nova.objects.instance [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lazy-loading 'migration_context' on Instance uuid 0914ee8e-421d-4e49-958e-4e659b7fdc22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.234 189020 DEBUG oslo_concurrency.processutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.235 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Acquiring lock "7b8f481705a33c6196332050fe7f03324002d985" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.236 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "7b8f481705a33c6196332050fe7f03324002d985" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.248 189020 DEBUG oslo_concurrency.processutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.265 189020 DEBUG nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.266 189020 DEBUG nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Ensure instance console log exists: /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.266 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.267 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.267 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.302 189020 DEBUG oslo_concurrency.processutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.303 189020 DEBUG oslo_concurrency.processutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985,backing_fmt=raw /var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.338 189020 DEBUG oslo_concurrency.processutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985,backing_fmt=raw /var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk 1073741824" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.340 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "7b8f481705a33c6196332050fe7f03324002d985" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.340 189020 DEBUG oslo_concurrency.processutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.392 189020 DEBUG oslo_concurrency.processutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.393 189020 DEBUG nova.virt.disk.api [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Checking if we can resize image /var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.394 189020 DEBUG oslo_concurrency.processutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.444 189020 DEBUG oslo_concurrency.processutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.446 189020 DEBUG nova.virt.disk.api [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Cannot resize image /var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.446 189020 DEBUG nova.objects.instance [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lazy-loading 'migration_context' on Instance uuid df0877c8-0a3d-46d8-aa05-76d180a75938 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.464 189020 DEBUG nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.465 189020 DEBUG nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Ensure instance console log exists: /var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.465 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.466 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.466 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.575 189020 DEBUG nova.policy [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd7476a1b8c814ab687793dcb836094b1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f278181458244cb4836c191782a17069', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 18 15:15:54 compute-0 nova_compute[189016]: 2026-02-18 15:15:54.747 189020 DEBUG nova.policy [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93b35b58df3646ce83b1a741e5458aa4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1b77ca7f6eb84bd0b8aa97e012cf6161', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.008 189020 DEBUG nova.network.neutron [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Updating instance_info_cache with network_info: [{"id": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "address": "fa:16:3e:90:05:38", "network": {"id": "ead595d8-b047-43ed-b112-5f20fc88b00b", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1908621658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8eb42d319554625b909271b1bba25e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef12a070-08", "ovs_interfaceid": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.028 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Releasing lock "refresh_cache-b81109ce-a363-4a87-b75b-e20429154b94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.029 189020 DEBUG nova.compute.manager [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Instance network_info: |[{"id": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "address": "fa:16:3e:90:05:38", "network": {"id": "ead595d8-b047-43ed-b112-5f20fc88b00b", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1908621658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8eb42d319554625b909271b1bba25e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef12a070-08", "ovs_interfaceid": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.030 189020 DEBUG oslo_concurrency.lockutils [req-ca951591-ab15-4000-bc5c-9cfcc1a08a5e req-a9f812d1-0f1f-4f0a-8c84-474a1589acff af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-b81109ce-a363-4a87-b75b-e20429154b94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.030 189020 DEBUG nova.network.neutron [req-ca951591-ab15-4000-bc5c-9cfcc1a08a5e req-a9f812d1-0f1f-4f0a-8c84-474a1589acff af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Refreshing network info cache for port ef12a070-08ba-4fda-8991-d7b4ec9d9258 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.033 189020 DEBUG nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Start _get_guest_xml network_info=[{"id": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "address": "fa:16:3e:90:05:38", "network": {"id": "ead595d8-b047-43ed-b112-5f20fc88b00b", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1908621658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8eb42d319554625b909271b1bba25e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef12a070-08", "ovs_interfaceid": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-18T15:14:47Z,direct_url=<?>,disk_format='qcow2',id=3b4a4a6a-1650-453f-ba10-3bb16d71641c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-18T15:14:48Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'image_id': '3b4a4a6a-1650-453f-ba10-3bb16d71641c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.046 189020 WARNING nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.061 189020 DEBUG nova.virt.libvirt.host [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.062 189020 DEBUG nova.virt.libvirt.host [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.067 189020 DEBUG nova.virt.libvirt.host [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.069 189020 DEBUG nova.virt.libvirt.host [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.069 189020 DEBUG nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.070 189020 DEBUG nova.virt.hardware [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-18T15:14:46Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1682e27b-a40b-4634-9ba2-5b28d38a8558',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-18T15:14:47Z,direct_url=<?>,disk_format='qcow2',id=3b4a4a6a-1650-453f-ba10-3bb16d71641c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-18T15:14:48Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.070 189020 DEBUG nova.virt.hardware [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.071 189020 DEBUG nova.virt.hardware [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.071 189020 DEBUG nova.virt.hardware [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.071 189020 DEBUG nova.virt.hardware [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.071 189020 DEBUG nova.virt.hardware [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.072 189020 DEBUG nova.virt.hardware [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.072 189020 DEBUG nova.virt.hardware [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.072 189020 DEBUG nova.virt.hardware [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.073 189020 DEBUG nova.virt.hardware [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.073 189020 DEBUG nova.virt.hardware [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.077 189020 DEBUG nova.virt.libvirt.vif [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-18T15:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1935075598',display_name='tempest-ServerAddressesTestJSON-server-1935075598',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1935075598',id=6,image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8eb42d319554625b909271b1bba25e5',ramdisk_id='',reservation_id='r-in1ix4fv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-13098154',owner_user_name='tempest-ServerAddressesTestJSON-13098154-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-18T15:15:47Z,user_data=None,user_id='90884ccf9a964b498da9370fd7f4bdce',uuid=b81109ce-a363-4a87-b75b-e20429154b94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "address": "fa:16:3e:90:05:38", "network": {"id": "ead595d8-b047-43ed-b112-5f20fc88b00b", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1908621658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8eb42d319554625b909271b1bba25e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef12a070-08", "ovs_interfaceid": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.078 189020 DEBUG nova.network.os_vif_util [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Converting VIF {"id": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "address": "fa:16:3e:90:05:38", "network": {"id": "ead595d8-b047-43ed-b112-5f20fc88b00b", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1908621658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8eb42d319554625b909271b1bba25e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef12a070-08", "ovs_interfaceid": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.079 189020 DEBUG nova.network.os_vif_util [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:05:38,bridge_name='br-int',has_traffic_filtering=True,id=ef12a070-08ba-4fda-8991-d7b4ec9d9258,network=Network(ead595d8-b047-43ed-b112-5f20fc88b00b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef12a070-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.080 189020 DEBUG nova.objects.instance [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid b81109ce-a363-4a87-b75b-e20429154b94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.095 189020 DEBUG nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] End _get_guest_xml xml=<domain type="kvm">
Feb 18 15:15:55 compute-0 nova_compute[189016]:  <uuid>b81109ce-a363-4a87-b75b-e20429154b94</uuid>
Feb 18 15:15:55 compute-0 nova_compute[189016]:  <name>instance-00000006</name>
Feb 18 15:15:55 compute-0 nova_compute[189016]:  <memory>131072</memory>
Feb 18 15:15:55 compute-0 nova_compute[189016]:  <vcpu>1</vcpu>
Feb 18 15:15:55 compute-0 nova_compute[189016]:  <metadata>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <nova:name>tempest-ServerAddressesTestJSON-server-1935075598</nova:name>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <nova:creationTime>2026-02-18 15:15:55</nova:creationTime>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <nova:flavor name="m1.nano">
Feb 18 15:15:55 compute-0 nova_compute[189016]:        <nova:memory>128</nova:memory>
Feb 18 15:15:55 compute-0 nova_compute[189016]:        <nova:disk>1</nova:disk>
Feb 18 15:15:55 compute-0 nova_compute[189016]:        <nova:swap>0</nova:swap>
Feb 18 15:15:55 compute-0 nova_compute[189016]:        <nova:ephemeral>0</nova:ephemeral>
Feb 18 15:15:55 compute-0 nova_compute[189016]:        <nova:vcpus>1</nova:vcpus>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      </nova:flavor>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <nova:owner>
Feb 18 15:15:55 compute-0 nova_compute[189016]:        <nova:user uuid="90884ccf9a964b498da9370fd7f4bdce">tempest-ServerAddressesTestJSON-13098154-project-member</nova:user>
Feb 18 15:15:55 compute-0 nova_compute[189016]:        <nova:project uuid="f8eb42d319554625b909271b1bba25e5">tempest-ServerAddressesTestJSON-13098154</nova:project>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      </nova:owner>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <nova:root type="image" uuid="3b4a4a6a-1650-453f-ba10-3bb16d71641c"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <nova:ports>
Feb 18 15:15:55 compute-0 nova_compute[189016]:        <nova:port uuid="ef12a070-08ba-4fda-8991-d7b4ec9d9258">
Feb 18 15:15:55 compute-0 nova_compute[189016]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:        </nova:port>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      </nova:ports>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    </nova:instance>
Feb 18 15:15:55 compute-0 nova_compute[189016]:  </metadata>
Feb 18 15:15:55 compute-0 nova_compute[189016]:  <sysinfo type="smbios">
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <system>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <entry name="manufacturer">RDO</entry>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <entry name="product">OpenStack Compute</entry>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <entry name="serial">b81109ce-a363-4a87-b75b-e20429154b94</entry>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <entry name="uuid">b81109ce-a363-4a87-b75b-e20429154b94</entry>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <entry name="family">Virtual Machine</entry>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    </system>
Feb 18 15:15:55 compute-0 nova_compute[189016]:  </sysinfo>
Feb 18 15:15:55 compute-0 nova_compute[189016]:  <os>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <boot dev="hd"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <smbios mode="sysinfo"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:  </os>
Feb 18 15:15:55 compute-0 nova_compute[189016]:  <features>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <acpi/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <apic/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <vmcoreinfo/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:  </features>
Feb 18 15:15:55 compute-0 nova_compute[189016]:  <clock offset="utc">
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <timer name="pit" tickpolicy="delay"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <timer name="hpet" present="no"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:  </clock>
Feb 18 15:15:55 compute-0 nova_compute[189016]:  <cpu mode="host-model" match="exact">
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <topology sockets="1" cores="1" threads="1"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:  </cpu>
Feb 18 15:15:55 compute-0 nova_compute[189016]:  <devices>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <disk type="file" device="disk">
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <driver name="qemu" type="qcow2" cache="none"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94/disk"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <target dev="vda" bus="virtio"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <disk type="file" device="cdrom">
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <driver name="qemu" type="raw" cache="none"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94/disk.config"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <target dev="sda" bus="sata"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <interface type="ethernet">
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <mac address="fa:16:3e:90:05:38"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <driver name="vhost" rx_queue_size="512"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <mtu size="1442"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <target dev="tapef12a070-08"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    </interface>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <serial type="pty">
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <log file="/var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94/console.log" append="off"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    </serial>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <video>
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    </video>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <input type="tablet" bus="usb"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <rng model="virtio">
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <backend model="random">/dev/urandom</backend>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    </rng>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <controller type="usb" index="0"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    <memballoon model="virtio">
Feb 18 15:15:55 compute-0 nova_compute[189016]:      <stats period="10"/>
Feb 18 15:15:55 compute-0 nova_compute[189016]:    </memballoon>
Feb 18 15:15:55 compute-0 nova_compute[189016]:  </devices>
Feb 18 15:15:55 compute-0 nova_compute[189016]: </domain>
Feb 18 15:15:55 compute-0 nova_compute[189016]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.097 189020 DEBUG nova.compute.manager [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Preparing to wait for external event network-vif-plugged-ef12a070-08ba-4fda-8991-d7b4ec9d9258 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.097 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Acquiring lock "b81109ce-a363-4a87-b75b-e20429154b94-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.097 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "b81109ce-a363-4a87-b75b-e20429154b94-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.098 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "b81109ce-a363-4a87-b75b-e20429154b94-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.098 189020 DEBUG nova.virt.libvirt.vif [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-18T15:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1935075598',display_name='tempest-ServerAddressesTestJSON-server-1935075598',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1935075598',id=6,image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8eb42d319554625b909271b1bba25e5',ramdisk_id='',reservation_id='r-in1ix4fv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-13098154',owner_user_name='tempest-ServerAddressesTestJSON-13098154-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-18T15:15:47Z,user_data=None,user_id='90884ccf9a964b498da9370fd7f4bdce',uuid=b81109ce-a363-4a87-b75b-e20429154b94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "address": "fa:16:3e:90:05:38", "network": {"id": "ead595d8-b047-43ed-b112-5f20fc88b00b", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1908621658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8eb42d319554625b909271b1bba25e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef12a070-08", "ovs_interfaceid": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.099 189020 DEBUG nova.network.os_vif_util [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Converting VIF {"id": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "address": "fa:16:3e:90:05:38", "network": {"id": "ead595d8-b047-43ed-b112-5f20fc88b00b", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1908621658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8eb42d319554625b909271b1bba25e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef12a070-08", "ovs_interfaceid": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.100 189020 DEBUG nova.network.os_vif_util [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:05:38,bridge_name='br-int',has_traffic_filtering=True,id=ef12a070-08ba-4fda-8991-d7b4ec9d9258,network=Network(ead595d8-b047-43ed-b112-5f20fc88b00b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef12a070-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.100 189020 DEBUG os_vif [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:05:38,bridge_name='br-int',has_traffic_filtering=True,id=ef12a070-08ba-4fda-8991-d7b4ec9d9258,network=Network(ead595d8-b047-43ed-b112-5f20fc88b00b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef12a070-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.101 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.101 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.102 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.108 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.108 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef12a070-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.109 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapef12a070-08, col_values=(('external_ids', {'iface-id': 'ef12a070-08ba-4fda-8991-d7b4ec9d9258', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:05:38', 'vm-uuid': 'b81109ce-a363-4a87-b75b-e20429154b94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.111 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:55 compute-0 NetworkManager[57258]: <info>  [1771427755.1140] manager: (tapef12a070-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.114 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.121 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.122 189020 INFO os_vif [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:05:38,bridge_name='br-int',has_traffic_filtering=True,id=ef12a070-08ba-4fda-8991-d7b4ec9d9258,network=Network(ead595d8-b047-43ed-b112-5f20fc88b00b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef12a070-08')#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.175 189020 DEBUG nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.176 189020 DEBUG nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.176 189020 DEBUG nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] No VIF found with MAC fa:16:3e:90:05:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.177 189020 INFO nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Using config drive#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.973 189020 INFO nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Creating config drive at /var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94/disk.config#033[00m
Feb 18 15:15:55 compute-0 nova_compute[189016]: 2026-02-18 15:15:55.980 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpqwg6nltl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.104 189020 DEBUG oslo_concurrency.processutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpqwg6nltl" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:56 compute-0 kernel: tapef12a070-08: entered promiscuous mode
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.190 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:56 compute-0 NetworkManager[57258]: <info>  [1771427756.1924] manager: (tapef12a070-08): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Feb 18 15:15:56 compute-0 ovn_controller[99062]: 2026-02-18T15:15:56Z|00066|binding|INFO|Claiming lport ef12a070-08ba-4fda-8991-d7b4ec9d9258 for this chassis.
Feb 18 15:15:56 compute-0 ovn_controller[99062]: 2026-02-18T15:15:56Z|00067|binding|INFO|ef12a070-08ba-4fda-8991-d7b4ec9d9258: Claiming fa:16:3e:90:05:38 10.100.0.12
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.195 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.289 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.290 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:05:38 10.100.0.12'], port_security=['fa:16:3e:90:05:38 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b81109ce-a363-4a87-b75b-e20429154b94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ead595d8-b047-43ed-b112-5f20fc88b00b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8eb42d319554625b909271b1bba25e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '38546452-7145-4d87-8436-5a22fd9216d5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e9be8b6-764e-4621-ba73-5d46031005dc, chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], logical_port=ef12a070-08ba-4fda-8991-d7b4ec9d9258) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.292 108400 INFO neutron.agent.ovn.metadata.agent [-] Port ef12a070-08ba-4fda-8991-d7b4ec9d9258 in datapath ead595d8-b047-43ed-b112-5f20fc88b00b bound to our chassis#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.294 108400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ead595d8-b047-43ed-b112-5f20fc88b00b#033[00m
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.304 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:56 compute-0 ovn_controller[99062]: 2026-02-18T15:15:56Z|00068|binding|INFO|Setting lport ef12a070-08ba-4fda-8991-d7b4ec9d9258 ovn-installed in OVS
Feb 18 15:15:56 compute-0 ovn_controller[99062]: 2026-02-18T15:15:56Z|00069|binding|INFO|Setting lport ef12a070-08ba-4fda-8991-d7b4ec9d9258 up in Southbound
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.307 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:56 compute-0 systemd-machined[158361]: New machine qemu-6-instance-00000006.
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.318 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[49aafdfb-5964-4a43-ab70-5655170a445b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.319 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapead595d8-b1 in ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.321 242262 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapead595d8-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.321 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[de23a002-fbc1-4c46-aeb9-c021d2c0e7b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.322 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[d1daee4d-466f-4c6c-89e7-4b6b879fd940]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:15:56 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.337 108948 DEBUG oslo.privsep.daemon [-] privsep: reply[9496c1f6-1e8a-48db-9dcf-30de2c4c030f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:15:56 compute-0 systemd-udevd[252336]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.358 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[c3271e9b-fdf3-4752-8ae0-f2a3a2a715dd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:15:56 compute-0 NetworkManager[57258]: <info>  [1771427756.3642] device (tapef12a070-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 18 15:15:56 compute-0 NetworkManager[57258]: <info>  [1771427756.3683] device (tapef12a070-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.393 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[e802fe64-21a4-4fd8-9d28-bdc12a1b4f9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:15:56 compute-0 NetworkManager[57258]: <info>  [1771427756.4043] manager: (tapead595d8-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.402 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a7963a-43e4-4163-bafb-cf48184b2866]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:15:56 compute-0 systemd-udevd[252339]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.431 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb82cee-503e-4569-bc4d-38c195036ac5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.436 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[dfdb8125-d517-4c64-83bb-36ef6e08543f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:15:56 compute-0 NetworkManager[57258]: <info>  [1771427756.4577] device (tapead595d8-b0): carrier: link connected
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.462 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb894ed-04df-4011-94f0-e891a41dee6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.477 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[49b5dff9-5eb4-44d1-943e-90d41e5a0b70]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapead595d8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:b2:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485391, 'reachable_time': 24930, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252367, 'error': None, 'target': 'ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.495 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[7da2cffa-5887-4c7f-8f30-f018b4b52dc5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:b248'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485391, 'tstamp': 485391}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252368, 'error': None, 'target': 'ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.511 189020 DEBUG nova.network.neutron [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Successfully created port: 7514447c-f6a7-4670-817a-906ea6344789 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.514 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[a7fd5ccd-1334-4c1d-9fe4-9f0ee37fcc14]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapead595d8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:b2:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485391, 'reachable_time': 24930, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252371, 'error': None, 'target': 'ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.519 189020 DEBUG nova.network.neutron [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Successfully created port: 2dcd7fbf-cb55-4dab-878d-1256e048fe07 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.553 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[381a7130-c279-4973-af16-be5c4194c305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.613 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[00412751-cf6c-44e0-85ae-ec08ccfb1715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.617 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapead595d8-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.617 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.618 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapead595d8-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:15:56 compute-0 NetworkManager[57258]: <info>  [1771427756.6239] manager: (tapead595d8-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Feb 18 15:15:56 compute-0 kernel: tapead595d8-b0: entered promiscuous mode
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.622 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.626 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.628 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapead595d8-b0, col_values=(('external_ids', {'iface-id': '2191775f-019a-4a5f-9068-7077711276f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:15:56 compute-0 ovn_controller[99062]: 2026-02-18T15:15:56Z|00070|binding|INFO|Releasing lport 2191775f-019a-4a5f-9068-7077711276f5 from this chassis (sb_readonly=0)
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.630 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.632 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.633 108400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ead595d8-b047-43ed-b112-5f20fc88b00b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ead595d8-b047-43ed-b112-5f20fc88b00b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.635 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[1140a443-375a-4333-8393-d82039ee9b42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.636 108400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: global
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    log         /dev/log local0 debug
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    log-tag     haproxy-metadata-proxy-ead595d8-b047-43ed-b112-5f20fc88b00b
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    user        root
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    group       root
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    maxconn     1024
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    pidfile     /var/lib/neutron/external/pids/ead595d8-b047-43ed-b112-5f20fc88b00b.pid.haproxy
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    daemon
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: defaults
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    log global
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    mode http
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    option httplog
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    option dontlognull
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    option http-server-close
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    option forwardfor
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    retries                 3
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    timeout http-request    30s
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    timeout connect         30s
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    timeout client          32s
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    timeout server          32s
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    timeout http-keep-alive 30s
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: listen listener
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    bind 169.254.169.254:80
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    server metadata /var/lib/neutron/metadata_proxy
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]:    http-request add-header X-OVN-Network-ID ead595d8-b047-43ed-b112-5f20fc88b00b
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 18 15:15:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:15:56.637 108400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b', 'env', 'PROCESS_TAG=haproxy-ead595d8-b047-43ed-b112-5f20fc88b00b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ead595d8-b047-43ed-b112-5f20fc88b00b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 18 15:15:56 compute-0 systemd[1]: Starting libvirt proxy daemon...
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.641 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.654 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427756.6538405, b81109ce-a363-4a87-b75b-e20429154b94 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.655 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: b81109ce-a363-4a87-b75b-e20429154b94] VM Started (Lifecycle Event)#033[00m
Feb 18 15:15:56 compute-0 systemd[1]: Started libvirt proxy daemon.
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.678 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.685 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427756.6540625, b81109ce-a363-4a87-b75b-e20429154b94 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.685 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: b81109ce-a363-4a87-b75b-e20429154b94] VM Paused (Lifecycle Event)#033[00m
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.714 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.720 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 15:15:56 compute-0 nova_compute[189016]: 2026-02-18 15:15:56.743 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: b81109ce-a363-4a87-b75b-e20429154b94] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 18 15:15:57 compute-0 nova_compute[189016]: 2026-02-18 15:15:57.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:15:57 compute-0 podman[252426]: 2026-02-18 15:15:57.085379545 +0000 UTC m=+0.072442313 container create ccc9b0afdd1c46561d5966d5ca6bf32dc6507ab29a944ad5fdd3e339aabea138 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 18 15:15:57 compute-0 systemd[1]: Started libpod-conmon-ccc9b0afdd1c46561d5966d5ca6bf32dc6507ab29a944ad5fdd3e339aabea138.scope.
Feb 18 15:15:57 compute-0 podman[252426]: 2026-02-18 15:15:57.04922125 +0000 UTC m=+0.036284038 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 18 15:15:57 compute-0 systemd[1]: Started libcrun container.
Feb 18 15:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ef38c5ddf5b225df378d426372296d42e860101f2b893344978cb63edadaefc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 18 15:15:57 compute-0 podman[252426]: 2026-02-18 15:15:57.203734336 +0000 UTC m=+0.190797154 container init ccc9b0afdd1c46561d5966d5ca6bf32dc6507ab29a944ad5fdd3e339aabea138 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 18 15:15:57 compute-0 podman[252426]: 2026-02-18 15:15:57.210634928 +0000 UTC m=+0.197697716 container start ccc9b0afdd1c46561d5966d5ca6bf32dc6507ab29a944ad5fdd3e339aabea138 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 18 15:15:57 compute-0 neutron-haproxy-ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b[252442]: [NOTICE]   (252446) : New worker (252448) forked
Feb 18 15:15:57 compute-0 neutron-haproxy-ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b[252442]: [NOTICE]   (252446) : Loading success.
Feb 18 15:15:57 compute-0 nova_compute[189016]: 2026-02-18 15:15:57.410 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Acquiring lock "538e968b-7f01-4e6b-af67-182df12fedec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:57 compute-0 nova_compute[189016]: 2026-02-18 15:15:57.412 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Lock "538e968b-7f01-4e6b-af67-182df12fedec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:57 compute-0 nova_compute[189016]: 2026-02-18 15:15:57.431 189020 DEBUG nova.compute.manager [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 18 15:15:57 compute-0 nova_compute[189016]: 2026-02-18 15:15:57.505 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:57 compute-0 nova_compute[189016]: 2026-02-18 15:15:57.506 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:57 compute-0 nova_compute[189016]: 2026-02-18 15:15:57.520 189020 DEBUG nova.virt.hardware [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 18 15:15:57 compute-0 nova_compute[189016]: 2026-02-18 15:15:57.521 189020 INFO nova.compute.claims [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 18 15:15:57 compute-0 nova_compute[189016]: 2026-02-18 15:15:57.631 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:15:57 compute-0 nova_compute[189016]: 2026-02-18 15:15:57.960 189020 DEBUG nova.compute.provider_tree [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:15:57 compute-0 nova_compute[189016]: 2026-02-18 15:15:57.983 189020 DEBUG nova.scheduler.client.report [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.018 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.020 189020 DEBUG nova.compute.manager [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.053 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.099 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.100 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.100 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.101 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.102 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.107 189020 DEBUG nova.compute.manager [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.108 189020 DEBUG nova.network.neutron [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.134 189020 INFO nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.172 189020 DEBUG nova.compute.manager [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.298 189020 DEBUG nova.compute.manager [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.300 189020 DEBUG nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.301 189020 INFO nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Creating image(s)#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.302 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Acquiring lock "/var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.303 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Lock "/var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.304 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Lock "/var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.325 189020 DEBUG oslo_concurrency.processutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.386 189020 DEBUG oslo_concurrency.processutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.387 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Acquiring lock "7b8f481705a33c6196332050fe7f03324002d985" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.388 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Lock "7b8f481705a33c6196332050fe7f03324002d985" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.398 189020 DEBUG oslo_concurrency.processutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.463 189020 DEBUG oslo_concurrency.processutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.464 189020 DEBUG oslo_concurrency.processutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985,backing_fmt=raw /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.506 189020 DEBUG nova.policy [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5092e33fb89a453bb8e6853648498f94', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e70a93fe3e61494488f1032883dfa661', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.511 189020 DEBUG oslo_concurrency.processutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985,backing_fmt=raw /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.512 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Lock "7b8f481705a33c6196332050fe7f03324002d985" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.512 189020 DEBUG oslo_concurrency.processutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.582 189020 DEBUG oslo_concurrency.processutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.583 189020 DEBUG nova.virt.disk.api [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Checking if we can resize image /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.584 189020 DEBUG oslo_concurrency.processutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.643 189020 DEBUG oslo_concurrency.processutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.645 189020 DEBUG nova.virt.disk.api [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Cannot resize image /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.645 189020 DEBUG nova.objects.instance [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Lazy-loading 'migration_context' on Instance uuid 538e968b-7f01-4e6b-af67-182df12fedec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.662 189020 DEBUG nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.663 189020 DEBUG nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Ensure instance console log exists: /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.664 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.664 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.665 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.678 189020 DEBUG nova.compute.manager [req-c5f6ca57-f003-4db4-911d-d5e6df1c98ac req-fffe6442-93b3-4011-bbb0-64c39f37679f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Received event network-vif-plugged-ef12a070-08ba-4fda-8991-d7b4ec9d9258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.679 189020 DEBUG oslo_concurrency.lockutils [req-c5f6ca57-f003-4db4-911d-d5e6df1c98ac req-fffe6442-93b3-4011-bbb0-64c39f37679f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "b81109ce-a363-4a87-b75b-e20429154b94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.680 189020 DEBUG oslo_concurrency.lockutils [req-c5f6ca57-f003-4db4-911d-d5e6df1c98ac req-fffe6442-93b3-4011-bbb0-64c39f37679f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "b81109ce-a363-4a87-b75b-e20429154b94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.680 189020 DEBUG oslo_concurrency.lockutils [req-c5f6ca57-f003-4db4-911d-d5e6df1c98ac req-fffe6442-93b3-4011-bbb0-64c39f37679f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "b81109ce-a363-4a87-b75b-e20429154b94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.680 189020 DEBUG nova.compute.manager [req-c5f6ca57-f003-4db4-911d-d5e6df1c98ac req-fffe6442-93b3-4011-bbb0-64c39f37679f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Processing event network-vif-plugged-ef12a070-08ba-4fda-8991-d7b4ec9d9258 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.682 189020 DEBUG nova.compute.manager [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.689 189020 DEBUG nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.690 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427758.688825, b81109ce-a363-4a87-b75b-e20429154b94 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.690 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: b81109ce-a363-4a87-b75b-e20429154b94] VM Resumed (Lifecycle Event)#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.698 189020 INFO nova.virt.libvirt.driver [-] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Instance spawned successfully.#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.699 189020 DEBUG nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.717 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.727 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.730 189020 DEBUG nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.730 189020 DEBUG nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.731 189020 DEBUG nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.731 189020 DEBUG nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.732 189020 DEBUG nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.732 189020 DEBUG nova.virt.libvirt.driver [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.763 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: b81109ce-a363-4a87-b75b-e20429154b94] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.792 189020 INFO nova.compute.manager [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Took 10.99 seconds to spawn the instance on the hypervisor.#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.793 189020 DEBUG nova.compute.manager [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.863 189020 INFO nova.compute.manager [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Took 11.58 seconds to build instance.#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.888 189020 DEBUG oslo_concurrency.lockutils [None req-4dd758e2-1004-4b82-b315-41adc57d20eb 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "b81109ce-a363-4a87-b75b-e20429154b94" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.965 189020 DEBUG nova.network.neutron [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Successfully updated port: 7514447c-f6a7-4670-817a-906ea6344789 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.970 189020 DEBUG nova.network.neutron [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Successfully updated port: 2dcd7fbf-cb55-4dab-878d-1256e048fe07 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.986 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Acquiring lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.987 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Acquired lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.987 189020 DEBUG nova.network.neutron [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.989 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Acquiring lock "refresh_cache-df0877c8-0a3d-46d8-aa05-76d180a75938" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.989 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Acquired lock "refresh_cache-df0877c8-0a3d-46d8-aa05-76d180a75938" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:15:58 compute-0 nova_compute[189016]: 2026-02-18 15:15:58.989 189020 DEBUG nova.network.neutron [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 18 15:15:59 compute-0 nova_compute[189016]: 2026-02-18 15:15:59.533 189020 DEBUG nova.network.neutron [req-ca951591-ab15-4000-bc5c-9cfcc1a08a5e req-a9f812d1-0f1f-4f0a-8c84-474a1589acff af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Updated VIF entry in instance network info cache for port ef12a070-08ba-4fda-8991-d7b4ec9d9258. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 18 15:15:59 compute-0 nova_compute[189016]: 2026-02-18 15:15:59.534 189020 DEBUG nova.network.neutron [req-ca951591-ab15-4000-bc5c-9cfcc1a08a5e req-a9f812d1-0f1f-4f0a-8c84-474a1589acff af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Updating instance_info_cache with network_info: [{"id": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "address": "fa:16:3e:90:05:38", "network": {"id": "ead595d8-b047-43ed-b112-5f20fc88b00b", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1908621658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8eb42d319554625b909271b1bba25e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef12a070-08", "ovs_interfaceid": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:15:59 compute-0 nova_compute[189016]: 2026-02-18 15:15:59.550 189020 DEBUG oslo_concurrency.lockutils [req-ca951591-ab15-4000-bc5c-9cfcc1a08a5e req-a9f812d1-0f1f-4f0a-8c84-474a1589acff af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-b81109ce-a363-4a87-b75b-e20429154b94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:15:59 compute-0 nova_compute[189016]: 2026-02-18 15:15:59.657 189020 DEBUG nova.network.neutron [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 18 15:15:59 compute-0 nova_compute[189016]: 2026-02-18 15:15:59.692 189020 DEBUG nova.network.neutron [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 18 15:15:59 compute-0 podman[204930]: time="2026-02-18T15:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:15:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29239 "" "Go-http-client/1.1"
Feb 18 15:15:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4370 "" "Go-http-client/1.1"
Feb 18 15:16:00 compute-0 nova_compute[189016]: 2026-02-18 15:16:00.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:16:00 compute-0 nova_compute[189016]: 2026-02-18 15:16:00.113 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:00 compute-0 nova_compute[189016]: 2026-02-18 15:16:00.400 189020 DEBUG nova.network.neutron [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Successfully created port: 00cd0913-0a46-457d-9b30-4007ec209a54 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 18 15:16:00 compute-0 podman[252472]: 2026-02-18 15:16:00.749858735 +0000 UTC m=+0.074214248 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 15:16:00 compute-0 podman[252473]: 2026-02-18 15:16:00.782079691 +0000 UTC m=+0.106254869 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1770267347, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64)
Feb 18 15:16:01 compute-0 openstack_network_exporter[208107]: ERROR   15:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:16:01 compute-0 openstack_network_exporter[208107]: ERROR   15:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:16:01 compute-0 nova_compute[189016]: 2026-02-18 15:16:01.472 189020 DEBUG nova.compute.manager [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Received event network-vif-plugged-ef12a070-08ba-4fda-8991-d7b4ec9d9258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:16:01 compute-0 nova_compute[189016]: 2026-02-18 15:16:01.474 189020 DEBUG oslo_concurrency.lockutils [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "b81109ce-a363-4a87-b75b-e20429154b94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:01 compute-0 nova_compute[189016]: 2026-02-18 15:16:01.474 189020 DEBUG oslo_concurrency.lockutils [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "b81109ce-a363-4a87-b75b-e20429154b94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:01 compute-0 nova_compute[189016]: 2026-02-18 15:16:01.475 189020 DEBUG oslo_concurrency.lockutils [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "b81109ce-a363-4a87-b75b-e20429154b94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:01 compute-0 nova_compute[189016]: 2026-02-18 15:16:01.475 189020 DEBUG nova.compute.manager [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] No waiting events found dispatching network-vif-plugged-ef12a070-08ba-4fda-8991-d7b4ec9d9258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:16:01 compute-0 nova_compute[189016]: 2026-02-18 15:16:01.476 189020 WARNING nova.compute.manager [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Received unexpected event network-vif-plugged-ef12a070-08ba-4fda-8991-d7b4ec9d9258 for instance with vm_state active and task_state None.#033[00m
Feb 18 15:16:01 compute-0 nova_compute[189016]: 2026-02-18 15:16:01.476 189020 DEBUG nova.compute.manager [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Received event network-changed-7514447c-f6a7-4670-817a-906ea6344789 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:16:01 compute-0 nova_compute[189016]: 2026-02-18 15:16:01.477 189020 DEBUG nova.compute.manager [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Refreshing instance network info cache due to event network-changed-7514447c-f6a7-4670-817a-906ea6344789. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 18 15:16:01 compute-0 nova_compute[189016]: 2026-02-18 15:16:01.477 189020 DEBUG oslo_concurrency.lockutils [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:16:01 compute-0 nova_compute[189016]: 2026-02-18 15:16:01.978 189020 DEBUG nova.network.neutron [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Updating instance_info_cache with network_info: [{"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.005 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Releasing lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.006 189020 DEBUG nova.compute.manager [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Instance network_info: |[{"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.008 189020 DEBUG oslo_concurrency.lockutils [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.009 189020 DEBUG nova.network.neutron [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Refreshing network info cache for port 7514447c-f6a7-4670-817a-906ea6344789 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.013 189020 DEBUG nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Start _get_guest_xml network_info=[{"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-18T15:14:47Z,direct_url=<?>,disk_format='qcow2',id=3b4a4a6a-1650-453f-ba10-3bb16d71641c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-18T15:14:48Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'image_id': '3b4a4a6a-1650-453f-ba10-3bb16d71641c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.024 189020 WARNING nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.029 189020 DEBUG oslo_concurrency.lockutils [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Acquiring lock "b81109ce-a363-4a87-b75b-e20429154b94" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.032 189020 DEBUG oslo_concurrency.lockutils [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "b81109ce-a363-4a87-b75b-e20429154b94" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.033 189020 DEBUG oslo_concurrency.lockutils [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Acquiring lock "b81109ce-a363-4a87-b75b-e20429154b94-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.034 189020 DEBUG oslo_concurrency.lockutils [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "b81109ce-a363-4a87-b75b-e20429154b94-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.034 189020 DEBUG oslo_concurrency.lockutils [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "b81109ce-a363-4a87-b75b-e20429154b94-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.036 189020 INFO nova.compute.manager [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Terminating instance#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.037 189020 DEBUG nova.compute.manager [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.042 189020 DEBUG nova.virt.libvirt.host [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.043 189020 DEBUG nova.virt.libvirt.host [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.054 189020 DEBUG nova.virt.libvirt.host [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.055 189020 DEBUG nova.virt.libvirt.host [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.056 189020 DEBUG nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.057 189020 DEBUG nova.virt.hardware [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-18T15:14:46Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1682e27b-a40b-4634-9ba2-5b28d38a8558',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-18T15:14:47Z,direct_url=<?>,disk_format='qcow2',id=3b4a4a6a-1650-453f-ba10-3bb16d71641c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-18T15:14:48Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.058 189020 DEBUG nova.virt.hardware [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.058 189020 DEBUG nova.virt.hardware [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.059 189020 DEBUG nova.virt.hardware [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.059 189020 DEBUG nova.virt.hardware [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.060 189020 DEBUG nova.virt.hardware [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.061 189020 DEBUG nova.virt.hardware [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.061 189020 DEBUG nova.virt.hardware [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.062 189020 DEBUG nova.virt.hardware [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.062 189020 DEBUG nova.virt.hardware [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.063 189020 DEBUG nova.virt.hardware [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 18 15:16:02 compute-0 kernel: tapef12a070-08 (unregistering): left promiscuous mode
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.069 189020 DEBUG nova.virt.libvirt.vif [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-18T15:15:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-81098757',display_name='tempest-ServerActionsTestJSON-server-81098757',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-81098757',id=7,image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJDCE2bGTc240mnDuFCQwo775RFg/uhDQhaw4PpU6AWyV4V2QOoLXLLRSspISvkSI6OMyoSPyUS6UNqASsDq57h8869z4QVr7NNF9GBrNesxXzdFiJ1rUKlUNDgICNGQQ==',key_name='tempest-keypair-238254365',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f278181458244cb4836c191782a17069',ramdisk_id='',reservation_id='r-0nnj0owd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-620119161',owner_user_name='tempest-ServerActionsTestJSON-620119161-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-18T15:15:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d7476a1b8c814ab687793dcb836094b1',uuid=0914ee8e-421d-4e49-958e-4e659b7fdc22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.070 189020 DEBUG nova.network.os_vif_util [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Converting VIF {"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.071 189020 DEBUG nova.network.os_vif_util [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:24:0f,bridge_name='br-int',has_traffic_filtering=True,id=7514447c-f6a7-4670-817a-906ea6344789,network=Network(7799c1f7-b42b-4f46-a4f0-f189be986a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7514447c-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.072 189020 DEBUG nova.objects.instance [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0914ee8e-421d-4e49-958e-4e659b7fdc22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:16:02 compute-0 NetworkManager[57258]: <info>  [1771427762.0778] device (tapef12a070-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 18 15:16:02 compute-0 ovn_controller[99062]: 2026-02-18T15:16:02Z|00071|binding|INFO|Releasing lport ef12a070-08ba-4fda-8991-d7b4ec9d9258 from this chassis (sb_readonly=0)
Feb 18 15:16:02 compute-0 ovn_controller[99062]: 2026-02-18T15:16:02Z|00072|binding|INFO|Setting lport ef12a070-08ba-4fda-8991-d7b4ec9d9258 down in Southbound
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.089 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:02 compute-0 ovn_controller[99062]: 2026-02-18T15:16:02Z|00073|binding|INFO|Removing iface tapef12a070-08 ovn-installed in OVS
Feb 18 15:16:02 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:02.098 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:05:38 10.100.0.12'], port_security=['fa:16:3e:90:05:38 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b81109ce-a363-4a87-b75b-e20429154b94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ead595d8-b047-43ed-b112-5f20fc88b00b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8eb42d319554625b909271b1bba25e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '38546452-7145-4d87-8436-5a22fd9216d5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6e9be8b6-764e-4621-ba73-5d46031005dc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], logical_port=ef12a070-08ba-4fda-8991-d7b4ec9d9258) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:16:02 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:02.100 108400 INFO neutron.agent.ovn.metadata.agent [-] Port ef12a070-08ba-4fda-8991-d7b4ec9d9258 in datapath ead595d8-b047-43ed-b112-5f20fc88b00b unbound from our chassis#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.101 189020 DEBUG nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] End _get_guest_xml xml=<domain type="kvm">
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <uuid>0914ee8e-421d-4e49-958e-4e659b7fdc22</uuid>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <name>instance-00000007</name>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <memory>131072</memory>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <vcpu>1</vcpu>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <metadata>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <nova:name>tempest-ServerActionsTestJSON-server-81098757</nova:name>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <nova:creationTime>2026-02-18 15:16:02</nova:creationTime>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <nova:flavor name="m1.nano">
Feb 18 15:16:02 compute-0 nova_compute[189016]:        <nova:memory>128</nova:memory>
Feb 18 15:16:02 compute-0 nova_compute[189016]:        <nova:disk>1</nova:disk>
Feb 18 15:16:02 compute-0 nova_compute[189016]:        <nova:swap>0</nova:swap>
Feb 18 15:16:02 compute-0 nova_compute[189016]:        <nova:ephemeral>0</nova:ephemeral>
Feb 18 15:16:02 compute-0 nova_compute[189016]:        <nova:vcpus>1</nova:vcpus>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      </nova:flavor>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <nova:owner>
Feb 18 15:16:02 compute-0 nova_compute[189016]:        <nova:user uuid="d7476a1b8c814ab687793dcb836094b1">tempest-ServerActionsTestJSON-620119161-project-member</nova:user>
Feb 18 15:16:02 compute-0 nova_compute[189016]:        <nova:project uuid="f278181458244cb4836c191782a17069">tempest-ServerActionsTestJSON-620119161</nova:project>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      </nova:owner>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <nova:root type="image" uuid="3b4a4a6a-1650-453f-ba10-3bb16d71641c"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <nova:ports>
Feb 18 15:16:02 compute-0 nova_compute[189016]:        <nova:port uuid="7514447c-f6a7-4670-817a-906ea6344789">
Feb 18 15:16:02 compute-0 nova_compute[189016]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:        </nova:port>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      </nova:ports>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </nova:instance>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  </metadata>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <sysinfo type="smbios">
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <system>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <entry name="manufacturer">RDO</entry>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <entry name="product">OpenStack Compute</entry>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <entry name="serial">0914ee8e-421d-4e49-958e-4e659b7fdc22</entry>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <entry name="uuid">0914ee8e-421d-4e49-958e-4e659b7fdc22</entry>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <entry name="family">Virtual Machine</entry>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </system>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  </sysinfo>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <os>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <boot dev="hd"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <smbios mode="sysinfo"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  </os>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <features>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <acpi/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <apic/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <vmcoreinfo/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  </features>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <clock offset="utc">
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <timer name="pit" tickpolicy="delay"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <timer name="hpet" present="no"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  </clock>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <cpu mode="host-model" match="exact">
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <topology sockets="1" cores="1" threads="1"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  </cpu>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <devices>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <disk type="file" device="disk">
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <driver name="qemu" type="qcow2" cache="none"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <target dev="vda" bus="virtio"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <disk type="file" device="cdrom">
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <driver name="qemu" type="raw" cache="none"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.config"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <target dev="sda" bus="sata"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <interface type="ethernet">
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <mac address="fa:16:3e:34:24:0f"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <driver name="vhost" rx_queue_size="512"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <mtu size="1442"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <target dev="tap7514447c-f6"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </interface>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <serial type="pty">
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <log file="/var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/console.log" append="off"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </serial>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <video>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </video>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <input type="tablet" bus="usb"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <rng model="virtio">
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <backend model="random">/dev/urandom</backend>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </rng>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="usb" index="0"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <memballoon model="virtio">
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <stats period="10"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </memballoon>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  </devices>
Feb 18 15:16:02 compute-0 nova_compute[189016]: </domain>
Feb 18 15:16:02 compute-0 nova_compute[189016]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.102 189020 DEBUG nova.compute.manager [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Preparing to wait for external event network-vif-plugged-7514447c-f6a7-4670-817a-906ea6344789 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.102 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Acquiring lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.102 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:02 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:02.103 108400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ead595d8-b047-43ed-b112-5f20fc88b00b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.103 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.104 189020 DEBUG nova.virt.libvirt.vif [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-18T15:15:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-81098757',display_name='tempest-ServerActionsTestJSON-server-81098757',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-81098757',id=7,image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJDCE2bGTc240mnDuFCQwo775RFg/uhDQhaw4PpU6AWyV4V2QOoLXLLRSspISvkSI6OMyoSPyUS6UNqASsDq57h8869z4QVr7NNF9GBrNesxXzdFiJ1rUKlUNDgICNGQQ==',key_name='tempest-keypair-238254365',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f278181458244cb4836c191782a17069',ramdisk_id='',reservation_id='r-0nnj0owd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-620119161',owner_user_name='tempest-ServerActionsTestJSON-620119161-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-18T15:15:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d7476a1b8c814ab687793dcb836094b1',uuid=0914ee8e-421d-4e49-958e-4e659b7fdc22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.104 189020 DEBUG nova.network.os_vif_util [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Converting VIF {"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:16:02 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:02.104 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[0640d905-d831-45f8-93bc-1ba2cef7627f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:02 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:02.105 108400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b namespace which is not needed anymore#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.105 189020 DEBUG nova.network.os_vif_util [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:24:0f,bridge_name='br-int',has_traffic_filtering=True,id=7514447c-f6a7-4670-817a-906ea6344789,network=Network(7799c1f7-b42b-4f46-a4f0-f189be986a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7514447c-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.105 189020 DEBUG os_vif [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:24:0f,bridge_name='br-int',has_traffic_filtering=True,id=7514447c-f6a7-4670-817a-906ea6344789,network=Network(7799c1f7-b42b-4f46-a4f0-f189be986a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7514447c-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.106 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.107 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.107 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.109 189020 DEBUG nova.network.neutron [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Successfully updated port: 00cd0913-0a46-457d-9b30-4007ec209a54 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.110 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.114 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.115 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7514447c-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.116 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7514447c-f6, col_values=(('external_ids', {'iface-id': '7514447c-f6a7-4670-817a-906ea6344789', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:24:0f', 'vm-uuid': '0914ee8e-421d-4e49-958e-4e659b7fdc22'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.118 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:02 compute-0 NetworkManager[57258]: <info>  [1771427762.1197] manager: (tap7514447c-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.120 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 18 15:16:02 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Feb 18 15:16:02 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 4.024s CPU time.
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.125 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Acquiring lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.126 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Acquired lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.126 189020 DEBUG nova.network.neutron [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 18 15:16:02 compute-0 systemd-machined[158361]: Machine qemu-6-instance-00000006 terminated.
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.129 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.131 189020 INFO os_vif [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:24:0f,bridge_name='br-int',has_traffic_filtering=True,id=7514447c-f6a7-4670-817a-906ea6344789,network=Network(7799c1f7-b42b-4f46-a4f0-f189be986a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7514447c-f6')#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.185 189020 DEBUG nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.186 189020 DEBUG nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.186 189020 DEBUG nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] No VIF found with MAC fa:16:3e:34:24:0f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.187 189020 INFO nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Using config drive#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.238 189020 DEBUG nova.network.neutron [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Updating instance_info_cache with network_info: [{"id": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "address": "fa:16:3e:ed:a6:7b", "network": {"id": "9089fbbe-3179-4b8d-b7a6-10b908ca65c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1534667452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b77ca7f6eb84bd0b8aa97e012cf6161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dcd7fbf-cb", "ovs_interfaceid": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:16:02 compute-0 neutron-haproxy-ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b[252442]: [NOTICE]   (252446) : haproxy version is 2.8.14-c23fe91
Feb 18 15:16:02 compute-0 neutron-haproxy-ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b[252442]: [NOTICE]   (252446) : path to executable is /usr/sbin/haproxy
Feb 18 15:16:02 compute-0 neutron-haproxy-ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b[252442]: [WARNING]  (252446) : Exiting Master process...
Feb 18 15:16:02 compute-0 neutron-haproxy-ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b[252442]: [ALERT]    (252446) : Current worker (252448) exited with code 143 (Terminated)
Feb 18 15:16:02 compute-0 neutron-haproxy-ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b[252442]: [WARNING]  (252446) : All workers exited. Exiting... (0)
Feb 18 15:16:02 compute-0 systemd[1]: libpod-ccc9b0afdd1c46561d5966d5ca6bf32dc6507ab29a944ad5fdd3e339aabea138.scope: Deactivated successfully.
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.259 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Releasing lock "refresh_cache-df0877c8-0a3d-46d8-aa05-76d180a75938" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.260 189020 DEBUG nova.compute.manager [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Instance network_info: |[{"id": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "address": "fa:16:3e:ed:a6:7b", "network": {"id": "9089fbbe-3179-4b8d-b7a6-10b908ca65c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1534667452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b77ca7f6eb84bd0b8aa97e012cf6161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dcd7fbf-cb", "ovs_interfaceid": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 18 15:16:02 compute-0 podman[252542]: 2026-02-18 15:16:02.262083203 +0000 UTC m=+0.059431528 container died ccc9b0afdd1c46561d5966d5ca6bf32dc6507ab29a944ad5fdd3e339aabea138 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.262 189020 DEBUG nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Start _get_guest_xml network_info=[{"id": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "address": "fa:16:3e:ed:a6:7b", "network": {"id": "9089fbbe-3179-4b8d-b7a6-10b908ca65c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1534667452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b77ca7f6eb84bd0b8aa97e012cf6161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dcd7fbf-cb", "ovs_interfaceid": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-18T15:14:47Z,direct_url=<?>,disk_format='qcow2',id=3b4a4a6a-1650-453f-ba10-3bb16d71641c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-18T15:14:48Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'image_id': '3b4a4a6a-1650-453f-ba10-3bb16d71641c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.285 189020 WARNING nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.295 189020 DEBUG nova.virt.libvirt.host [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.297 189020 DEBUG nova.virt.libvirt.host [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.305 189020 DEBUG nova.virt.libvirt.host [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.306 189020 DEBUG nova.virt.libvirt.host [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.307 189020 DEBUG nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.307 189020 DEBUG nova.virt.hardware [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-18T15:14:46Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1682e27b-a40b-4634-9ba2-5b28d38a8558',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-18T15:14:47Z,direct_url=<?>,disk_format='qcow2',id=3b4a4a6a-1650-453f-ba10-3bb16d71641c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-18T15:14:48Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.307 189020 DEBUG nova.virt.hardware [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.307 189020 DEBUG nova.virt.hardware [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.308 189020 DEBUG nova.virt.hardware [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.308 189020 DEBUG nova.virt.hardware [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.308 189020 DEBUG nova.virt.hardware [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.308 189020 DEBUG nova.virt.hardware [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.308 189020 DEBUG nova.virt.hardware [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 18 15:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ccc9b0afdd1c46561d5966d5ca6bf32dc6507ab29a944ad5fdd3e339aabea138-userdata-shm.mount: Deactivated successfully.
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.308 189020 DEBUG nova.virt.hardware [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.309 189020 DEBUG nova.virt.hardware [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.309 189020 DEBUG nova.virt.hardware [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 18 15:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ef38c5ddf5b225df378d426372296d42e860101f2b893344978cb63edadaefc-merged.mount: Deactivated successfully.
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.313 189020 DEBUG nova.virt.libvirt.vif [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-18T15:15:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1382464634',display_name='tempest-ServersTestJSON-server-1382464634',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1382464634',id=8,image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPV+VS8QDvIbZdoFg9k559rswy3U9pGEFTwfUSaJ3kBkhg0YzNdri1JEKtugebWlPyY+7VkYzwk1UXJfrufY1zRV9J4Xr8FVtQBFXYpI4MfGzJIep0jpozk42iTvnnJtZw==',key_name='tempest-keypair-374307044',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b77ca7f6eb84bd0b8aa97e012cf6161',ramdisk_id='',reservation_id='r-6iu0onyl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-813304290',owner_user_name='tempest-ServersTestJSON-813304290-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-18T15:15:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93b35b58df3646ce83b1a741e5458aa4',uuid=df0877c8-0a3d-46d8-aa05-76d180a75938,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "address": "fa:16:3e:ed:a6:7b", "network": {"id": "9089fbbe-3179-4b8d-b7a6-10b908ca65c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1534667452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b77ca7f6eb84bd0b8aa97e012cf6161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dcd7fbf-cb", "ovs_interfaceid": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.313 189020 DEBUG nova.network.os_vif_util [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Converting VIF {"id": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "address": "fa:16:3e:ed:a6:7b", "network": {"id": "9089fbbe-3179-4b8d-b7a6-10b908ca65c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1534667452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b77ca7f6eb84bd0b8aa97e012cf6161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dcd7fbf-cb", "ovs_interfaceid": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.314 189020 DEBUG nova.network.os_vif_util [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:a6:7b,bridge_name='br-int',has_traffic_filtering=True,id=2dcd7fbf-cb55-4dab-878d-1256e048fe07,network=Network(9089fbbe-3179-4b8d-b7a6-10b908ca65c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dcd7fbf-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.315 189020 DEBUG nova.objects.instance [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lazy-loading 'pci_devices' on Instance uuid df0877c8-0a3d-46d8-aa05-76d180a75938 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:16:02 compute-0 podman[252542]: 2026-02-18 15:16:02.324038033 +0000 UTC m=+0.121386358 container cleanup ccc9b0afdd1c46561d5966d5ca6bf32dc6507ab29a944ad5fdd3e339aabea138 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.324 189020 INFO nova.virt.libvirt.driver [-] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Instance destroyed successfully.#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.325 189020 DEBUG nova.objects.instance [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lazy-loading 'resources' on Instance uuid b81109ce-a363-4a87-b75b-e20429154b94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.330 189020 DEBUG nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] End _get_guest_xml xml=<domain type="kvm">
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <uuid>df0877c8-0a3d-46d8-aa05-76d180a75938</uuid>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <name>instance-00000008</name>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <memory>131072</memory>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <vcpu>1</vcpu>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <metadata>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <nova:name>tempest-ServersTestJSON-server-1382464634</nova:name>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <nova:creationTime>2026-02-18 15:16:02</nova:creationTime>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <nova:flavor name="m1.nano">
Feb 18 15:16:02 compute-0 nova_compute[189016]:        <nova:memory>128</nova:memory>
Feb 18 15:16:02 compute-0 nova_compute[189016]:        <nova:disk>1</nova:disk>
Feb 18 15:16:02 compute-0 nova_compute[189016]:        <nova:swap>0</nova:swap>
Feb 18 15:16:02 compute-0 nova_compute[189016]:        <nova:ephemeral>0</nova:ephemeral>
Feb 18 15:16:02 compute-0 nova_compute[189016]:        <nova:vcpus>1</nova:vcpus>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      </nova:flavor>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <nova:owner>
Feb 18 15:16:02 compute-0 nova_compute[189016]:        <nova:user uuid="93b35b58df3646ce83b1a741e5458aa4">tempest-ServersTestJSON-813304290-project-member</nova:user>
Feb 18 15:16:02 compute-0 nova_compute[189016]:        <nova:project uuid="1b77ca7f6eb84bd0b8aa97e012cf6161">tempest-ServersTestJSON-813304290</nova:project>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      </nova:owner>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <nova:root type="image" uuid="3b4a4a6a-1650-453f-ba10-3bb16d71641c"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <nova:ports>
Feb 18 15:16:02 compute-0 nova_compute[189016]:        <nova:port uuid="2dcd7fbf-cb55-4dab-878d-1256e048fe07">
Feb 18 15:16:02 compute-0 nova_compute[189016]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:        </nova:port>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      </nova:ports>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </nova:instance>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  </metadata>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <sysinfo type="smbios">
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <system>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <entry name="manufacturer">RDO</entry>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <entry name="product">OpenStack Compute</entry>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <entry name="serial">df0877c8-0a3d-46d8-aa05-76d180a75938</entry>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <entry name="uuid">df0877c8-0a3d-46d8-aa05-76d180a75938</entry>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <entry name="family">Virtual Machine</entry>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </system>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  </sysinfo>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <os>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <boot dev="hd"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <smbios mode="sysinfo"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  </os>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <features>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <acpi/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <apic/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <vmcoreinfo/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  </features>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <clock offset="utc">
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <timer name="pit" tickpolicy="delay"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <timer name="hpet" present="no"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  </clock>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <cpu mode="host-model" match="exact">
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <topology sockets="1" cores="1" threads="1"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  </cpu>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  <devices>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <disk type="file" device="disk">
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <driver name="qemu" type="qcow2" cache="none"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <target dev="vda" bus="virtio"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <disk type="file" device="cdrom">
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <driver name="qemu" type="raw" cache="none"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk.config"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <target dev="sda" bus="sata"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <interface type="ethernet">
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <mac address="fa:16:3e:ed:a6:7b"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <driver name="vhost" rx_queue_size="512"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <mtu size="1442"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <target dev="tap2dcd7fbf-cb"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </interface>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <serial type="pty">
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <log file="/var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/console.log" append="off"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </serial>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <video>
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </video>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <input type="tablet" bus="usb"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <rng model="virtio">
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <backend model="random">/dev/urandom</backend>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </rng>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <controller type="usb" index="0"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    <memballoon model="virtio">
Feb 18 15:16:02 compute-0 nova_compute[189016]:      <stats period="10"/>
Feb 18 15:16:02 compute-0 nova_compute[189016]:    </memballoon>
Feb 18 15:16:02 compute-0 nova_compute[189016]:  </devices>
Feb 18 15:16:02 compute-0 nova_compute[189016]: </domain>
Feb 18 15:16:02 compute-0 nova_compute[189016]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.331 189020 DEBUG nova.compute.manager [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Preparing to wait for external event network-vif-plugged-2dcd7fbf-cb55-4dab-878d-1256e048fe07 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.331 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Acquiring lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.331 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.331 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.332 189020 DEBUG nova.virt.libvirt.vif [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-18T15:15:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1382464634',display_name='tempest-ServersTestJSON-server-1382464634',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1382464634',id=8,image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPV+VS8QDvIbZdoFg9k559rswy3U9pGEFTwfUSaJ3kBkhg0YzNdri1JEKtugebWlPyY+7VkYzwk1UXJfrufY1zRV9J4Xr8FVtQBFXYpI4MfGzJIep0jpozk42iTvnnJtZw==',key_name='tempest-keypair-374307044',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b77ca7f6eb84bd0b8aa97e012cf6161',ramdisk_id='',reservation_id='r-6iu0onyl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-813304290',owner_user_name='tempest-ServersTestJSON-813304290-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-18T15:15:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93b35b58df3646ce83b1a741e5458aa4',uuid=df0877c8-0a3d-46d8-aa05-76d180a75938,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "address": "fa:16:3e:ed:a6:7b", "network": {"id": "9089fbbe-3179-4b8d-b7a6-10b908ca65c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1534667452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b77ca7f6eb84bd0b8aa97e012cf6161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dcd7fbf-cb", "ovs_interfaceid": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.332 189020 DEBUG nova.network.os_vif_util [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Converting VIF {"id": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "address": "fa:16:3e:ed:a6:7b", "network": {"id": "9089fbbe-3179-4b8d-b7a6-10b908ca65c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1534667452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b77ca7f6eb84bd0b8aa97e012cf6161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dcd7fbf-cb", "ovs_interfaceid": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.333 189020 DEBUG nova.network.os_vif_util [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:a6:7b,bridge_name='br-int',has_traffic_filtering=True,id=2dcd7fbf-cb55-4dab-878d-1256e048fe07,network=Network(9089fbbe-3179-4b8d-b7a6-10b908ca65c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dcd7fbf-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.333 189020 DEBUG os_vif [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:a6:7b,bridge_name='br-int',has_traffic_filtering=True,id=2dcd7fbf-cb55-4dab-878d-1256e048fe07,network=Network(9089fbbe-3179-4b8d-b7a6-10b908ca65c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dcd7fbf-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.333 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.334 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.334 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:16:02 compute-0 systemd[1]: libpod-conmon-ccc9b0afdd1c46561d5966d5ca6bf32dc6507ab29a944ad5fdd3e339aabea138.scope: Deactivated successfully.
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.341 189020 DEBUG nova.virt.libvirt.vif [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-18T15:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1935075598',display_name='tempest-ServerAddressesTestJSON-server-1935075598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1935075598',id=6,image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-18T15:15:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f8eb42d319554625b909271b1bba25e5',ramdisk_id='',reservation_id='r-in1ix4fv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',imag
e_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-13098154',owner_user_name='tempest-ServerAddressesTestJSON-13098154-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-18T15:15:58Z,user_data=None,user_id='90884ccf9a964b498da9370fd7f4bdce',uuid=b81109ce-a363-4a87-b75b-e20429154b94,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "address": "fa:16:3e:90:05:38", "network": {"id": "ead595d8-b047-43ed-b112-5f20fc88b00b", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1908621658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8eb42d319554625b909271b1bba25e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef12a070-08", "ovs_interfaceid": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.341 189020 DEBUG nova.network.os_vif_util [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Converting VIF {"id": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "address": "fa:16:3e:90:05:38", "network": {"id": "ead595d8-b047-43ed-b112-5f20fc88b00b", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1908621658-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8eb42d319554625b909271b1bba25e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef12a070-08", "ovs_interfaceid": "ef12a070-08ba-4fda-8991-d7b4ec9d9258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.343 189020 DEBUG nova.network.os_vif_util [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:05:38,bridge_name='br-int',has_traffic_filtering=True,id=ef12a070-08ba-4fda-8991-d7b4ec9d9258,network=Network(ead595d8-b047-43ed-b112-5f20fc88b00b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef12a070-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.344 189020 DEBUG os_vif [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:05:38,bridge_name='br-int',has_traffic_filtering=True,id=ef12a070-08ba-4fda-8991-d7b4ec9d9258,network=Network(ead595d8-b047-43ed-b112-5f20fc88b00b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef12a070-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.346 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.347 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2dcd7fbf-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.348 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2dcd7fbf-cb, col_values=(('external_ids', {'iface-id': '2dcd7fbf-cb55-4dab-878d-1256e048fe07', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:a6:7b', 'vm-uuid': 'df0877c8-0a3d-46d8-aa05-76d180a75938'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:02 compute-0 NetworkManager[57258]: <info>  [1771427762.3512] manager: (tap2dcd7fbf-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.350 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.351 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.362 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.364 189020 INFO os_vif [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:a6:7b,bridge_name='br-int',has_traffic_filtering=True,id=2dcd7fbf-cb55-4dab-878d-1256e048fe07,network=Network(9089fbbe-3179-4b8d-b7a6-10b908ca65c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dcd7fbf-cb')#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.366 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.366 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef12a070-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.369 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.370 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.387 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.391 189020 INFO os_vif [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:05:38,bridge_name='br-int',has_traffic_filtering=True,id=ef12a070-08ba-4fda-8991-d7b4ec9d9258,network=Network(ead595d8-b047-43ed-b112-5f20fc88b00b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef12a070-08')#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.392 189020 INFO nova.virt.libvirt.driver [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Deleting instance files /var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94_del#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.392 189020 INFO nova.virt.libvirt.driver [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Deletion of /var/lib/nova/instances/b81109ce-a363-4a87-b75b-e20429154b94_del complete#033[00m
Feb 18 15:16:02 compute-0 podman[252589]: 2026-02-18 15:16:02.405311126 +0000 UTC m=+0.058598787 container remove ccc9b0afdd1c46561d5966d5ca6bf32dc6507ab29a944ad5fdd3e339aabea138 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 15:16:02 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:02.411 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[22ebb649-6f33-4ac1-a04b-c015e850c472]: (4, ('Wed Feb 18 03:16:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b (ccc9b0afdd1c46561d5966d5ca6bf32dc6507ab29a944ad5fdd3e339aabea138)\nccc9b0afdd1c46561d5966d5ca6bf32dc6507ab29a944ad5fdd3e339aabea138\nWed Feb 18 03:16:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b (ccc9b0afdd1c46561d5966d5ca6bf32dc6507ab29a944ad5fdd3e339aabea138)\nccc9b0afdd1c46561d5966d5ca6bf32dc6507ab29a944ad5fdd3e339aabea138\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:02 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:02.414 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[88dfad89-2b03-429a-bc5d-f54976c9d0e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:02 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:02.416 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapead595d8-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.419 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:02 compute-0 kernel: tapead595d8-b0: left promiscuous mode
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.432 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:02 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:02.436 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[63a03500-7d5d-4079-909e-eea953018a75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.437 189020 DEBUG nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.438 189020 DEBUG nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.438 189020 DEBUG nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] No VIF found with MAC fa:16:3e:ed:a6:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.439 189020 INFO nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Using config drive#033[00m
Feb 18 15:16:02 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:02.451 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce7e711-83d2-432b-909c-f4c11c1b7a1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:02 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:02.453 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[51402482-3ba4-4fb3-a058-27bcf01c82db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:02 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:02.472 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[fc2233e0-08c7-441c-a116-2de2c6bc3f59]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485384, 'reachable_time': 31014, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252615, 'error': None, 'target': 'ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:02 compute-0 systemd[1]: run-netns-ovnmeta\x2dead595d8\x2db047\x2d43ed\x2db112\x2d5f20fc88b00b.mount: Deactivated successfully.
Feb 18 15:16:02 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:02.477 108948 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ead595d8-b047-43ed-b112-5f20fc88b00b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 18 15:16:02 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:02.477 108948 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5c3cb1-b2b8-4591-a971-f5e1c9499e1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.485 189020 INFO nova.compute.manager [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Took 0.45 seconds to destroy the instance on the hypervisor.#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.486 189020 DEBUG oslo.service.loopingcall [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.486 189020 DEBUG nova.compute.manager [-] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.486 189020 DEBUG nova.network.neutron [-] [instance: b81109ce-a363-4a87-b75b-e20429154b94] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.527 189020 DEBUG nova.network.neutron [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 18 15:16:02 compute-0 nova_compute[189016]: 2026-02-18 15:16:02.634 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:03 compute-0 nova_compute[189016]: 2026-02-18 15:16:03.165 189020 INFO nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Creating config drive at /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.config#033[00m
Feb 18 15:16:03 compute-0 nova_compute[189016]: 2026-02-18 15:16:03.170 189020 DEBUG oslo_concurrency.processutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpe2hovapa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:16:03 compute-0 nova_compute[189016]: 2026-02-18 15:16:03.289 189020 DEBUG oslo_concurrency.processutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpe2hovapa" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:16:03 compute-0 NetworkManager[57258]: <info>  [1771427763.7006] manager: (tap7514447c-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Feb 18 15:16:03 compute-0 systemd-udevd[252522]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 15:16:03 compute-0 kernel: tap7514447c-f6: entered promiscuous mode
Feb 18 15:16:03 compute-0 nova_compute[189016]: 2026-02-18 15:16:03.705 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:03 compute-0 ovn_controller[99062]: 2026-02-18T15:16:03Z|00074|binding|INFO|Claiming lport 7514447c-f6a7-4670-817a-906ea6344789 for this chassis.
Feb 18 15:16:03 compute-0 ovn_controller[99062]: 2026-02-18T15:16:03Z|00075|binding|INFO|7514447c-f6a7-4670-817a-906ea6344789: Claiming fa:16:3e:34:24:0f 10.100.0.8
Feb 18 15:16:03 compute-0 NetworkManager[57258]: <info>  [1771427763.7167] device (tap7514447c-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 18 15:16:03 compute-0 NetworkManager[57258]: <info>  [1771427763.7217] device (tap7514447c-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.723 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:24:0f 10.100.0.8'], port_security=['fa:16:3e:34:24:0f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0914ee8e-421d-4e49-958e-4e659b7fdc22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7799c1f7-b42b-4f46-a4f0-f189be986a35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f278181458244cb4836c191782a17069', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aa8824c1-6371-4fe5-a8cf-d0fe69d37717', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8327e11-8865-4094-ab67-65cb3aafaf73, chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], logical_port=7514447c-f6a7-4670-817a-906ea6344789) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.724 108400 INFO neutron.agent.ovn.metadata.agent [-] Port 7514447c-f6a7-4670-817a-906ea6344789 in datapath 7799c1f7-b42b-4f46-a4f0-f189be986a35 bound to our chassis#033[00m
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.727 108400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7799c1f7-b42b-4f46-a4f0-f189be986a35#033[00m
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.736 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb272ec-b46e-4ae1-8613-5786e2774e96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:03 compute-0 systemd-machined[158361]: New machine qemu-7-instance-00000007.
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.738 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7799c1f7-b1 in ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 18 15:16:03 compute-0 nova_compute[189016]: 2026-02-18 15:16:03.739 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.741 242262 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7799c1f7-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.741 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa28ada-6e51-4d2b-b38f-a8c4afbe70d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.742 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[79c88644-01c6-42e5-98d6-2d1c998b0cd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:03 compute-0 ovn_controller[99062]: 2026-02-18T15:16:03Z|00076|binding|INFO|Setting lport 7514447c-f6a7-4670-817a-906ea6344789 ovn-installed in OVS
Feb 18 15:16:03 compute-0 ovn_controller[99062]: 2026-02-18T15:16:03Z|00077|binding|INFO|Setting lport 7514447c-f6a7-4670-817a-906ea6344789 up in Southbound
Feb 18 15:16:03 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Feb 18 15:16:03 compute-0 nova_compute[189016]: 2026-02-18 15:16:03.751 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.753 108948 DEBUG oslo.privsep.daemon [-] privsep: reply[e60000ca-0257-4070-8897-0503f70c8f12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.795 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[c29c6bf1-cc00-46c3-9b38-7d324cb7f8e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:03 compute-0 nova_compute[189016]: 2026-02-18 15:16:03.815 189020 INFO nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Creating config drive at /var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk.config#033[00m
Feb 18 15:16:03 compute-0 nova_compute[189016]: 2026-02-18 15:16:03.822 189020 DEBUG oslo_concurrency.processutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpibh47m5p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.827 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[042d29cc-c4e0-4e92-a076-eed0c2581321]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.835 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[8d80b0a7-5a41-4782-ad2c-2be28bf18fca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:03 compute-0 NetworkManager[57258]: <info>  [1771427763.8365] manager: (tap7799c1f7-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.866 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[01734901-d1bf-4339-b0e0-e767d1951dac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.869 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[690771a1-beb3-48ab-8d34-21ecf36be93d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:03 compute-0 NetworkManager[57258]: <info>  [1771427763.8907] device (tap7799c1f7-b0): carrier: link connected
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.892 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3b5273-12cc-49c9-bb99-fb4fcb9c5338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:03 compute-0 nova_compute[189016]: 2026-02-18 15:16:03.901 189020 DEBUG nova.compute.manager [req-a17510c0-98a2-4012-8290-2bd91356a243 req-d4dc2bc8-4e0f-4aee-a9a1-83ce82c05e44 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Received event network-changed-00cd0913-0a46-457d-9b30-4007ec209a54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:16:03 compute-0 nova_compute[189016]: 2026-02-18 15:16:03.902 189020 DEBUG nova.compute.manager [req-a17510c0-98a2-4012-8290-2bd91356a243 req-d4dc2bc8-4e0f-4aee-a9a1-83ce82c05e44 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Refreshing instance network info cache due to event network-changed-00cd0913-0a46-457d-9b30-4007ec209a54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 18 15:16:03 compute-0 nova_compute[189016]: 2026-02-18 15:16:03.902 189020 DEBUG oslo_concurrency.lockutils [req-a17510c0-98a2-4012-8290-2bd91356a243 req-d4dc2bc8-4e0f-4aee-a9a1-83ce82c05e44 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.910 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[777e94f2-61db-4eef-bec5-e295457f5cd5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7799c1f7-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c2:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486134, 'reachable_time': 18368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252674, 'error': None, 'target': 'ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.925 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[de509ca6-a180-41db-901c-5be13ab16710]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:c2a2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 486134, 'tstamp': 486134}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252675, 'error': None, 'target': 'ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.939 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[b963693f-e58b-4617-b995-7db07a63044e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7799c1f7-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c2:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486134, 'reachable_time': 18368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252676, 'error': None, 'target': 'ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:03 compute-0 nova_compute[189016]: 2026-02-18 15:16:03.943 189020 DEBUG oslo_concurrency.processutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpibh47m5p" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:16:03 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:03.971 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[4f7507fa-da0d-4fe8-8a66-fbb8f7dc2f55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:04 compute-0 systemd-udevd[252666]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 15:16:04 compute-0 kernel: tap2dcd7fbf-cb: entered promiscuous mode
Feb 18 15:16:04 compute-0 NetworkManager[57258]: <info>  [1771427764.0127] manager: (tap2dcd7fbf-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.016 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:04 compute-0 ovn_controller[99062]: 2026-02-18T15:16:04Z|00078|binding|INFO|Claiming lport 2dcd7fbf-cb55-4dab-878d-1256e048fe07 for this chassis.
Feb 18 15:16:04 compute-0 ovn_controller[99062]: 2026-02-18T15:16:04Z|00079|binding|INFO|2dcd7fbf-cb55-4dab-878d-1256e048fe07: Claiming fa:16:3e:ed:a6:7b 10.100.0.13
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.023 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:04 compute-0 NetworkManager[57258]: <info>  [1771427764.0305] device (tap2dcd7fbf-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 18 15:16:04 compute-0 NetworkManager[57258]: <info>  [1771427764.0347] device (tap2dcd7fbf-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.042 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:a6:7b 10.100.0.13'], port_security=['fa:16:3e:ed:a6:7b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'df0877c8-0a3d-46d8-aa05-76d180a75938', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9089fbbe-3179-4b8d-b7a6-10b908ca65c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b77ca7f6eb84bd0b8aa97e012cf6161', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aa10ddf9-cda7-411c-941e-8993607f811b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e53fcb8-ef5e-4f73-94e4-08c795529883, chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], logical_port=2dcd7fbf-cb55-4dab-878d-1256e048fe07) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.042 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[047d8ec7-0b65-48cf-9c85-33157fa5d0d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.044 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7799c1f7-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.045 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.045 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7799c1f7-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:04 compute-0 systemd-machined[158361]: New machine qemu-8-instance-00000008.
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.048 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:04 compute-0 NetworkManager[57258]: <info>  [1771427764.0489] manager: (tap7799c1f7-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:16:04 compute-0 kernel: tap7799c1f7-b0: entered promiscuous mode
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.055 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.058 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7799c1f7-b0, col_values=(('external_ids', {'iface-id': 'b78157d5-9744-4676-ba4f-c8dc9c568cc2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.061 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:04 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Feb 18 15:16:04 compute-0 ovn_controller[99062]: 2026-02-18T15:16:04Z|00080|binding|INFO|Setting lport 2dcd7fbf-cb55-4dab-878d-1256e048fe07 ovn-installed in OVS
Feb 18 15:16:04 compute-0 ovn_controller[99062]: 2026-02-18T15:16:04Z|00081|binding|INFO|Setting lport 2dcd7fbf-cb55-4dab-878d-1256e048fe07 up in Southbound
Feb 18 15:16:04 compute-0 ovn_controller[99062]: 2026-02-18T15:16:04Z|00082|binding|INFO|Releasing lport b78157d5-9744-4676-ba4f-c8dc9c568cc2 from this chassis (sb_readonly=1)
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.066 108400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7799c1f7-b42b-4f46-a4f0-f189be986a35.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7799c1f7-b42b-4f46-a4f0-f189be986a35.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.067 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[49fbf6ff-9e78-47b5-8e04-052812afc620]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.069 108400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: global
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    log         /dev/log local0 debug
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    log-tag     haproxy-metadata-proxy-7799c1f7-b42b-4f46-a4f0-f189be986a35
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    user        root
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    group       root
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    maxconn     1024
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    pidfile     /var/lib/neutron/external/pids/7799c1f7-b42b-4f46-a4f0-f189be986a35.pid.haproxy
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    daemon
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: defaults
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    log global
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    mode http
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    option httplog
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    option dontlognull
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    option http-server-close
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    option forwardfor
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    retries                 3
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    timeout http-request    30s
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    timeout connect         30s
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    timeout client          32s
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    timeout server          32s
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    timeout http-keep-alive 30s
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: listen listener
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    bind 169.254.169.254:80
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    server metadata /var/lib/neutron/metadata_proxy
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]:    http-request add-header X-OVN-Network-ID 7799c1f7-b42b-4f46-a4f0-f189be986a35
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.070 108400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35', 'env', 'PROCESS_TAG=haproxy-7799c1f7-b42b-4f46-a4f0-f189be986a35', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7799c1f7-b42b-4f46-a4f0-f189be986a35.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.080 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.085 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.086 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.086 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.086 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.221 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.281 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.282 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.339 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.348 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.408 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.410 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:16:04 compute-0 podman[252735]: 2026-02-18 15:16:04.471044022 +0000 UTC m=+0.061888409 container create 192116fa46edf8668da6c40461c861bfe60e5fa410499199132b4d5dc7f651af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.482 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:16:04 compute-0 podman[252735]: 2026-02-18 15:16:04.436254232 +0000 UTC m=+0.027098669 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 18 15:16:04 compute-0 systemd[1]: Started libpod-conmon-192116fa46edf8668da6c40461c861bfe60e5fa410499199132b4d5dc7f651af.scope.
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.574 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427764.574044, df0877c8-0a3d-46d8-aa05-76d180a75938 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.575 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] VM Started (Lifecycle Event)#033[00m
Feb 18 15:16:04 compute-0 systemd[1]: Started libcrun container.
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.601 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3ddd6f17e1c0fa84715e9a004b58efc5965158d9dcbebaf2d6808f0e0b1527e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.614 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427764.5741878, df0877c8-0a3d-46d8-aa05-76d180a75938 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.615 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] VM Paused (Lifecycle Event)#033[00m
Feb 18 15:16:04 compute-0 podman[252735]: 2026-02-18 15:16:04.622426798 +0000 UTC m=+0.213271185 container init 192116fa46edf8668da6c40461c861bfe60e5fa410499199132b4d5dc7f651af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 15:16:04 compute-0 podman[252735]: 2026-02-18 15:16:04.631538946 +0000 UTC m=+0.222383333 container start 192116fa46edf8668da6c40461c861bfe60e5fa410499199132b4d5dc7f651af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.641 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.648 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 15:16:04 compute-0 neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35[252762]: [NOTICE]   (252767) : New worker (252769) forked
Feb 18 15:16:04 compute-0 neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35[252762]: [NOTICE]   (252767) : Loading success.
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.674 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.744 108400 INFO neutron.agent.ovn.metadata.agent [-] Port 2dcd7fbf-cb55-4dab-878d-1256e048fe07 in datapath 9089fbbe-3179-4b8d-b7a6-10b908ca65c8 unbound from our chassis#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.754 108400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9089fbbe-3179-4b8d-b7a6-10b908ca65c8#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.765 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[7b90b6a0-b816-47c5-a8b1-91344cf7787f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.767 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9089fbbe-31 in ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.769 242262 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9089fbbe-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.769 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[6a330631-60c6-43e0-9890-7f12b72fccef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.770 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[b063216b-b3bc-4b6f-b8f7-3253a819b385]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.781 108948 DEBUG oslo.privsep.daemon [-] privsep: reply[8f20ee39-1679-41ca-9e28-47bea67a91e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.792 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[c74c5c38-7afc-4971-be13-8e8ec03f1c1d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.814 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[519867fb-5bc2-40b0-a176-92ffb5379021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.821 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[7163e9ee-1856-4c29-835e-baa48ded9aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:04 compute-0 NetworkManager[57258]: <info>  [1771427764.8245] manager: (tap9089fbbe-30): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.833 189020 DEBUG nova.network.neutron [-] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.852 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[f3189be6-0d2f-4aab-bcdb-38e0e6935871]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.855 189020 INFO nova.compute.manager [-] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Took 2.37 seconds to deallocate network for instance.#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.856 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[36caa341-28f7-4462-97a4-5e45b9f03505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.873 189020 DEBUG nova.compute.manager [req-7f87f78d-d771-41b8-b7e6-f79853fd917f req-da532c05-f9d4-43de-87e4-b94000fa162b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Received event network-vif-unplugged-ef12a070-08ba-4fda-8991-d7b4ec9d9258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.873 189020 DEBUG oslo_concurrency.lockutils [req-7f87f78d-d771-41b8-b7e6-f79853fd917f req-da532c05-f9d4-43de-87e4-b94000fa162b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "b81109ce-a363-4a87-b75b-e20429154b94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.873 189020 DEBUG oslo_concurrency.lockutils [req-7f87f78d-d771-41b8-b7e6-f79853fd917f req-da532c05-f9d4-43de-87e4-b94000fa162b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "b81109ce-a363-4a87-b75b-e20429154b94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.873 189020 DEBUG oslo_concurrency.lockutils [req-7f87f78d-d771-41b8-b7e6-f79853fd917f req-da532c05-f9d4-43de-87e4-b94000fa162b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "b81109ce-a363-4a87-b75b-e20429154b94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.874 189020 DEBUG nova.compute.manager [req-7f87f78d-d771-41b8-b7e6-f79853fd917f req-da532c05-f9d4-43de-87e4-b94000fa162b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] No waiting events found dispatching network-vif-unplugged-ef12a070-08ba-4fda-8991-d7b4ec9d9258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.874 189020 DEBUG nova.compute.manager [req-7f87f78d-d771-41b8-b7e6-f79853fd917f req-da532c05-f9d4-43de-87e4-b94000fa162b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Received event network-vif-unplugged-ef12a070-08ba-4fda-8991-d7b4ec9d9258 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.874 189020 DEBUG nova.compute.manager [req-7f87f78d-d771-41b8-b7e6-f79853fd917f req-da532c05-f9d4-43de-87e4-b94000fa162b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Received event network-vif-plugged-ef12a070-08ba-4fda-8991-d7b4ec9d9258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.874 189020 DEBUG oslo_concurrency.lockutils [req-7f87f78d-d771-41b8-b7e6-f79853fd917f req-da532c05-f9d4-43de-87e4-b94000fa162b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "b81109ce-a363-4a87-b75b-e20429154b94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.874 189020 DEBUG oslo_concurrency.lockutils [req-7f87f78d-d771-41b8-b7e6-f79853fd917f req-da532c05-f9d4-43de-87e4-b94000fa162b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "b81109ce-a363-4a87-b75b-e20429154b94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.874 189020 DEBUG oslo_concurrency.lockutils [req-7f87f78d-d771-41b8-b7e6-f79853fd917f req-da532c05-f9d4-43de-87e4-b94000fa162b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "b81109ce-a363-4a87-b75b-e20429154b94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.875 189020 DEBUG nova.compute.manager [req-7f87f78d-d771-41b8-b7e6-f79853fd917f req-da532c05-f9d4-43de-87e4-b94000fa162b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] No waiting events found dispatching network-vif-plugged-ef12a070-08ba-4fda-8991-d7b4ec9d9258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.875 189020 WARNING nova.compute.manager [req-7f87f78d-d771-41b8-b7e6-f79853fd917f req-da532c05-f9d4-43de-87e4-b94000fa162b af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Received unexpected event network-vif-plugged-ef12a070-08ba-4fda-8991-d7b4ec9d9258 for instance with vm_state active and task_state deleting.#033[00m
Feb 18 15:16:04 compute-0 NetworkManager[57258]: <info>  [1771427764.8790] device (tap9089fbbe-30): carrier: link connected
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.884 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf31049-d4cb-47f6-b9fc-dfebae81564b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.906 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[b042d6d8-8e7c-4b9f-826a-21b045dfdf7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9089fbbe-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:0d:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486233, 'reachable_time': 21410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252788, 'error': None, 'target': 'ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.908 189020 DEBUG oslo_concurrency.lockutils [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:04 compute-0 nova_compute[189016]: 2026-02-18 15:16:04.909 189020 DEBUG oslo_concurrency.lockutils [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.927 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[3c6646d9-647a-41c0-be18-10940aaa60fb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:daa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 486233, 'tstamp': 486233}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252790, 'error': None, 'target': 'ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.948 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[b9a41bbd-8e53-4737-9f61-1808623a4fa6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9089fbbe-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:0d:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486233, 'reachable_time': 21410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252793, 'error': None, 'target': 'ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:04 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:04.979 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[05466fb1-df1e-42cc-8ccf-fae05e0d999b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.028 189020 DEBUG nova.compute.provider_tree [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:05.030 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[102170e8-a26a-4357-a3ba-9d97e90201bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:05.036 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9089fbbe-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:05.036 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:05.037 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9089fbbe-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:05 compute-0 kernel: tap9089fbbe-30: entered promiscuous mode
Feb 18 15:16:05 compute-0 NetworkManager[57258]: <info>  [1771427765.0409] manager: (tap9089fbbe-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.039 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.044 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:05.047 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9089fbbe-30, col_values=(('external_ids', {'iface-id': '43e7172a-6c7b-4dec-b23f-fc0c94739d6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:05 compute-0 ovn_controller[99062]: 2026-02-18T15:16:05Z|00083|binding|INFO|Releasing lport 43e7172a-6c7b-4dec-b23f-fc0c94739d6c from this chassis (sb_readonly=0)
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.050 189020 DEBUG nova.scheduler.client.report [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:05.055 108400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9089fbbe-3179-4b8d-b7a6-10b908ca65c8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9089fbbe-3179-4b8d-b7a6-10b908ca65c8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.056 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:05.056 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[bed312e8-dab8-456d-bbed-bdbde6c61db5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:05.057 108400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]: global
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    log         /dev/log local0 debug
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    log-tag     haproxy-metadata-proxy-9089fbbe-3179-4b8d-b7a6-10b908ca65c8
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    user        root
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    group       root
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    maxconn     1024
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    pidfile     /var/lib/neutron/external/pids/9089fbbe-3179-4b8d-b7a6-10b908ca65c8.pid.haproxy
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    daemon
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]: defaults
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    log global
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    mode http
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    option httplog
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    option dontlognull
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    option http-server-close
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    option forwardfor
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    retries                 3
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    timeout http-request    30s
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    timeout connect         30s
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    timeout client          32s
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    timeout server          32s
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    timeout http-keep-alive 30s
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]: listen listener
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    bind 169.254.169.254:80
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    server metadata /var/lib/neutron/metadata_proxy
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]:    http-request add-header X-OVN-Network-ID 9089fbbe-3179-4b8d-b7a6-10b908ca65c8
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 18 15:16:05 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:05.058 108400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8', 'env', 'PROCESS_TAG=haproxy-9089fbbe-3179-4b8d-b7a6-10b908ca65c8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9089fbbe-3179-4b8d-b7a6-10b908ca65c8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.059 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427765.0558918, 0914ee8e-421d-4e49-958e-4e659b7fdc22 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.060 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] VM Started (Lifecycle Event)#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.075 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.077 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5245MB free_disk=72.20602035522461GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.077 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.085 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.087 189020 DEBUG oslo_concurrency.lockutils [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.090 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.095 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427765.0564435, 0914ee8e-421d-4e49-958e-4e659b7fdc22 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.096 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] VM Paused (Lifecycle Event)#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.128 189020 INFO nova.scheduler.client.report [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Deleted allocations for instance b81109ce-a363-4a87-b75b-e20429154b94#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.130 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.140 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.178 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.189 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 0914ee8e-421d-4e49-958e-4e659b7fdc22 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.190 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance df0877c8-0a3d-46d8-aa05-76d180a75938 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.190 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 538e968b-7f01-4e6b-af67-182df12fedec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.190 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.191 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.214 189020 DEBUG oslo_concurrency.lockutils [None req-9ff5b14d-0abe-4cf4-92ce-35f293095352 90884ccf9a964b498da9370fd7f4bdce f8eb42d319554625b909271b1bba25e5 - - default default] Lock "b81109ce-a363-4a87-b75b-e20429154b94" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.276 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.291 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.313 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 15:16:05 compute-0 nova_compute[189016]: 2026-02-18 15:16:05.313 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:05 compute-0 podman[252828]: 2026-02-18 15:16:05.473744755 +0000 UTC m=+0.064340541 container create 07d79b6b9dbc97581aa0cd7507d4b83638429313f635d4eaa490c03cc5e78d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 18 15:16:05 compute-0 systemd[1]: Started libpod-conmon-07d79b6b9dbc97581aa0cd7507d4b83638429313f635d4eaa490c03cc5e78d33.scope.
Feb 18 15:16:05 compute-0 podman[252828]: 2026-02-18 15:16:05.444869802 +0000 UTC m=+0.035465588 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 18 15:16:05 compute-0 systemd[1]: Started libcrun container.
Feb 18 15:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dd966040ae4c25752430199ed38156e868d999239f5c44323a48afce6b14e83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 18 15:16:05 compute-0 podman[252828]: 2026-02-18 15:16:05.57148076 +0000 UTC m=+0.162076576 container init 07d79b6b9dbc97581aa0cd7507d4b83638429313f635d4eaa490c03cc5e78d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 18 15:16:05 compute-0 podman[252828]: 2026-02-18 15:16:05.579315766 +0000 UTC m=+0.169911552 container start 07d79b6b9dbc97581aa0cd7507d4b83638429313f635d4eaa490c03cc5e78d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 18 15:16:05 compute-0 neutron-haproxy-ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8[252842]: [NOTICE]   (252846) : New worker (252848) forked
Feb 18 15:16:05 compute-0 neutron-haproxy-ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8[252842]: [NOTICE]   (252846) : Loading success.
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.313 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.472 189020 DEBUG nova.network.neutron [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Updated VIF entry in instance network info cache for port 7514447c-f6a7-4670-817a-906ea6344789. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.472 189020 DEBUG nova.network.neutron [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Updating instance_info_cache with network_info: [{"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.492 189020 DEBUG oslo_concurrency.lockutils [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.492 189020 DEBUG nova.compute.manager [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Received event network-changed-2dcd7fbf-cb55-4dab-878d-1256e048fe07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.492 189020 DEBUG nova.compute.manager [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Refreshing instance network info cache due to event network-changed-2dcd7fbf-cb55-4dab-878d-1256e048fe07. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.493 189020 DEBUG oslo_concurrency.lockutils [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-df0877c8-0a3d-46d8-aa05-76d180a75938" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.493 189020 DEBUG oslo_concurrency.lockutils [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-df0877c8-0a3d-46d8-aa05-76d180a75938" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.493 189020 DEBUG nova.network.neutron [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Refreshing network info cache for port 2dcd7fbf-cb55-4dab-878d-1256e048fe07 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.500 189020 DEBUG nova.network.neutron [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Updating instance_info_cache with network_info: [{"id": "00cd0913-0a46-457d-9b30-4007ec209a54", "address": "fa:16:3e:94:08:4b", "network": {"id": "aa0dcae9-fb1d-4854-8c2f-bb40797fee0c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-836147848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e70a93fe3e61494488f1032883dfa661", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00cd0913-0a", "ovs_interfaceid": "00cd0913-0a46-457d-9b30-4007ec209a54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.518 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Releasing lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.518 189020 DEBUG nova.compute.manager [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Instance network_info: |[{"id": "00cd0913-0a46-457d-9b30-4007ec209a54", "address": "fa:16:3e:94:08:4b", "network": {"id": "aa0dcae9-fb1d-4854-8c2f-bb40797fee0c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-836147848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e70a93fe3e61494488f1032883dfa661", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00cd0913-0a", "ovs_interfaceid": "00cd0913-0a46-457d-9b30-4007ec209a54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.518 189020 DEBUG oslo_concurrency.lockutils [req-a17510c0-98a2-4012-8290-2bd91356a243 req-d4dc2bc8-4e0f-4aee-a9a1-83ce82c05e44 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.519 189020 DEBUG nova.network.neutron [req-a17510c0-98a2-4012-8290-2bd91356a243 req-d4dc2bc8-4e0f-4aee-a9a1-83ce82c05e44 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Refreshing network info cache for port 00cd0913-0a46-457d-9b30-4007ec209a54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.521 189020 DEBUG nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Start _get_guest_xml network_info=[{"id": "00cd0913-0a46-457d-9b30-4007ec209a54", "address": "fa:16:3e:94:08:4b", "network": {"id": "aa0dcae9-fb1d-4854-8c2f-bb40797fee0c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-836147848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e70a93fe3e61494488f1032883dfa661", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00cd0913-0a", "ovs_interfaceid": "00cd0913-0a46-457d-9b30-4007ec209a54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-18T15:14:47Z,direct_url=<?>,disk_format='qcow2',id=3b4a4a6a-1650-453f-ba10-3bb16d71641c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-18T15:14:48Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'image_id': '3b4a4a6a-1650-453f-ba10-3bb16d71641c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.528 189020 WARNING nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.537 189020 DEBUG nova.virt.libvirt.host [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.538 189020 DEBUG nova.virt.libvirt.host [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.542 189020 DEBUG nova.virt.libvirt.host [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.543 189020 DEBUG nova.virt.libvirt.host [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.543 189020 DEBUG nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.543 189020 DEBUG nova.virt.hardware [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-18T15:14:46Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1682e27b-a40b-4634-9ba2-5b28d38a8558',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-18T15:14:47Z,direct_url=<?>,disk_format='qcow2',id=3b4a4a6a-1650-453f-ba10-3bb16d71641c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-18T15:14:48Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.544 189020 DEBUG nova.virt.hardware [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.544 189020 DEBUG nova.virt.hardware [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.544 189020 DEBUG nova.virt.hardware [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.544 189020 DEBUG nova.virt.hardware [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.544 189020 DEBUG nova.virt.hardware [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.545 189020 DEBUG nova.virt.hardware [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.545 189020 DEBUG nova.virt.hardware [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.545 189020 DEBUG nova.virt.hardware [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.545 189020 DEBUG nova.virt.hardware [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.546 189020 DEBUG nova.virt.hardware [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.549 189020 DEBUG nova.virt.libvirt.vif [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-18T15:15:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1891672357',display_name='tempest-AttachInterfacesUnderV243Test-server-1891672357',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1891672357',id=9,image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAJZ4J4KtG6o9mXsTplYbZS+MGwyqI4cvMMExERewvSi60DZJlIcWW3Qsi8YW7EkNOAOOhP3gq3ayA5lBuqwIS4RYkGRazr1YFtOK0/iMfBy8ALwNGlJzJES2KXe2DKckg==',key_name='tempest-keypair-475234563',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e70a93fe3e61494488f1032883dfa661',ramdisk_id='',reservation_id='r-ocem2aka',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-2137751138',owner_user_name='tempest-AttachInterfacesUnderV243Test-2137751138-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-18T15:15:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5092e33fb89a453bb8e6853648498f94',uuid=538e968b-7f01-4e6b-af67-182df12fedec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00cd0913-0a46-457d-9b30-4007ec209a54", "address": "fa:16:3e:94:08:4b", "network": {"id": "aa0dcae9-fb1d-4854-8c2f-bb40797fee0c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-836147848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e70a93fe3e61494488f1032883dfa661", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00cd0913-0a", "ovs_interfaceid": "00cd0913-0a46-457d-9b30-4007ec209a54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.549 189020 DEBUG nova.network.os_vif_util [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Converting VIF {"id": "00cd0913-0a46-457d-9b30-4007ec209a54", "address": "fa:16:3e:94:08:4b", "network": {"id": "aa0dcae9-fb1d-4854-8c2f-bb40797fee0c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-836147848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e70a93fe3e61494488f1032883dfa661", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00cd0913-0a", "ovs_interfaceid": "00cd0913-0a46-457d-9b30-4007ec209a54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.550 189020 DEBUG nova.network.os_vif_util [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:08:4b,bridge_name='br-int',has_traffic_filtering=True,id=00cd0913-0a46-457d-9b30-4007ec209a54,network=Network(aa0dcae9-fb1d-4854-8c2f-bb40797fee0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00cd0913-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.550 189020 DEBUG nova.objects.instance [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Lazy-loading 'pci_devices' on Instance uuid 538e968b-7f01-4e6b-af67-182df12fedec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.576 189020 DEBUG nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] End _get_guest_xml xml=<domain type="kvm">
Feb 18 15:16:06 compute-0 nova_compute[189016]:  <uuid>538e968b-7f01-4e6b-af67-182df12fedec</uuid>
Feb 18 15:16:06 compute-0 nova_compute[189016]:  <name>instance-00000009</name>
Feb 18 15:16:06 compute-0 nova_compute[189016]:  <memory>131072</memory>
Feb 18 15:16:06 compute-0 nova_compute[189016]:  <vcpu>1</vcpu>
Feb 18 15:16:06 compute-0 nova_compute[189016]:  <metadata>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-1891672357</nova:name>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <nova:creationTime>2026-02-18 15:16:06</nova:creationTime>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <nova:flavor name="m1.nano">
Feb 18 15:16:06 compute-0 nova_compute[189016]:        <nova:memory>128</nova:memory>
Feb 18 15:16:06 compute-0 nova_compute[189016]:        <nova:disk>1</nova:disk>
Feb 18 15:16:06 compute-0 nova_compute[189016]:        <nova:swap>0</nova:swap>
Feb 18 15:16:06 compute-0 nova_compute[189016]:        <nova:ephemeral>0</nova:ephemeral>
Feb 18 15:16:06 compute-0 nova_compute[189016]:        <nova:vcpus>1</nova:vcpus>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      </nova:flavor>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <nova:owner>
Feb 18 15:16:06 compute-0 nova_compute[189016]:        <nova:user uuid="5092e33fb89a453bb8e6853648498f94">tempest-AttachInterfacesUnderV243Test-2137751138-project-member</nova:user>
Feb 18 15:16:06 compute-0 nova_compute[189016]:        <nova:project uuid="e70a93fe3e61494488f1032883dfa661">tempest-AttachInterfacesUnderV243Test-2137751138</nova:project>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      </nova:owner>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <nova:root type="image" uuid="3b4a4a6a-1650-453f-ba10-3bb16d71641c"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <nova:ports>
Feb 18 15:16:06 compute-0 nova_compute[189016]:        <nova:port uuid="00cd0913-0a46-457d-9b30-4007ec209a54">
Feb 18 15:16:06 compute-0 nova_compute[189016]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:        </nova:port>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      </nova:ports>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    </nova:instance>
Feb 18 15:16:06 compute-0 nova_compute[189016]:  </metadata>
Feb 18 15:16:06 compute-0 nova_compute[189016]:  <sysinfo type="smbios">
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <system>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <entry name="manufacturer">RDO</entry>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <entry name="product">OpenStack Compute</entry>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <entry name="serial">538e968b-7f01-4e6b-af67-182df12fedec</entry>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <entry name="uuid">538e968b-7f01-4e6b-af67-182df12fedec</entry>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <entry name="family">Virtual Machine</entry>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    </system>
Feb 18 15:16:06 compute-0 nova_compute[189016]:  </sysinfo>
Feb 18 15:16:06 compute-0 nova_compute[189016]:  <os>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <boot dev="hd"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <smbios mode="sysinfo"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:  </os>
Feb 18 15:16:06 compute-0 nova_compute[189016]:  <features>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <acpi/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <apic/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <vmcoreinfo/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:  </features>
Feb 18 15:16:06 compute-0 nova_compute[189016]:  <clock offset="utc">
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <timer name="pit" tickpolicy="delay"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <timer name="hpet" present="no"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:  </clock>
Feb 18 15:16:06 compute-0 nova_compute[189016]:  <cpu mode="host-model" match="exact">
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <topology sockets="1" cores="1" threads="1"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:  </cpu>
Feb 18 15:16:06 compute-0 nova_compute[189016]:  <devices>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <disk type="file" device="disk">
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <driver name="qemu" type="qcow2" cache="none"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <target dev="vda" bus="virtio"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <disk type="file" device="cdrom">
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <driver name="qemu" type="raw" cache="none"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk.config"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <target dev="sda" bus="sata"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <interface type="ethernet">
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <mac address="fa:16:3e:94:08:4b"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <driver name="vhost" rx_queue_size="512"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <mtu size="1442"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <target dev="tap00cd0913-0a"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    </interface>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <serial type="pty">
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <log file="/var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/console.log" append="off"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    </serial>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <video>
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    </video>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <input type="tablet" bus="usb"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <rng model="virtio">
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <backend model="random">/dev/urandom</backend>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    </rng>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <controller type="usb" index="0"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    <memballoon model="virtio">
Feb 18 15:16:06 compute-0 nova_compute[189016]:      <stats period="10"/>
Feb 18 15:16:06 compute-0 nova_compute[189016]:    </memballoon>
Feb 18 15:16:06 compute-0 nova_compute[189016]:  </devices>
Feb 18 15:16:06 compute-0 nova_compute[189016]: </domain>
Feb 18 15:16:06 compute-0 nova_compute[189016]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.577 189020 DEBUG nova.compute.manager [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Preparing to wait for external event network-vif-plugged-00cd0913-0a46-457d-9b30-4007ec209a54 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.577 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Acquiring lock "538e968b-7f01-4e6b-af67-182df12fedec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.577 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Lock "538e968b-7f01-4e6b-af67-182df12fedec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.577 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Lock "538e968b-7f01-4e6b-af67-182df12fedec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.578 189020 DEBUG nova.virt.libvirt.vif [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-18T15:15:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1891672357',display_name='tempest-AttachInterfacesUnderV243Test-server-1891672357',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1891672357',id=9,image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAJZ4J4KtG6o9mXsTplYbZS+MGwyqI4cvMMExERewvSi60DZJlIcWW3Qsi8YW7EkNOAOOhP3gq3ayA5lBuqwIS4RYkGRazr1YFtOK0/iMfBy8ALwNGlJzJES2KXe2DKckg==',key_name='tempest-keypair-475234563',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e70a93fe3e61494488f1032883dfa661',ramdisk_id='',reservation_id='r-ocem2aka',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-2137751138',owner_user_name='tempest-AttachInterfacesUnderV243Test-2137751138-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-18T15:15:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5092e33fb89a453bb8e6853648498f94',uuid=538e968b-7f01-4e6b-af67-182df12fedec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00cd0913-0a46-457d-9b30-4007ec209a54", "address": "fa:16:3e:94:08:4b", "network": {"id": "aa0dcae9-fb1d-4854-8c2f-bb40797fee0c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-836147848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e70a93fe3e61494488f1032883dfa661", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00cd0913-0a", "ovs_interfaceid": "00cd0913-0a46-457d-9b30-4007ec209a54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.578 189020 DEBUG nova.network.os_vif_util [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Converting VIF {"id": "00cd0913-0a46-457d-9b30-4007ec209a54", "address": "fa:16:3e:94:08:4b", "network": {"id": "aa0dcae9-fb1d-4854-8c2f-bb40797fee0c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-836147848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e70a93fe3e61494488f1032883dfa661", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00cd0913-0a", "ovs_interfaceid": "00cd0913-0a46-457d-9b30-4007ec209a54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.579 189020 DEBUG nova.network.os_vif_util [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:08:4b,bridge_name='br-int',has_traffic_filtering=True,id=00cd0913-0a46-457d-9b30-4007ec209a54,network=Network(aa0dcae9-fb1d-4854-8c2f-bb40797fee0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00cd0913-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.579 189020 DEBUG os_vif [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:08:4b,bridge_name='br-int',has_traffic_filtering=True,id=00cd0913-0a46-457d-9b30-4007ec209a54,network=Network(aa0dcae9-fb1d-4854-8c2f-bb40797fee0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00cd0913-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.579 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.580 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.580 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.583 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.583 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00cd0913-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.584 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap00cd0913-0a, col_values=(('external_ids', {'iface-id': '00cd0913-0a46-457d-9b30-4007ec209a54', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:94:08:4b', 'vm-uuid': '538e968b-7f01-4e6b-af67-182df12fedec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.586 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:06 compute-0 NetworkManager[57258]: <info>  [1771427766.5881] manager: (tap00cd0913-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.589 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.592 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.593 189020 INFO os_vif [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:08:4b,bridge_name='br-int',has_traffic_filtering=True,id=00cd0913-0a46-457d-9b30-4007ec209a54,network=Network(aa0dcae9-fb1d-4854-8c2f-bb40797fee0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00cd0913-0a')#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.646 189020 DEBUG nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.647 189020 DEBUG nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.648 189020 DEBUG nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] No VIF found with MAC fa:16:3e:94:08:4b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.649 189020 INFO nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Using config drive#033[00m
Feb 18 15:16:06 compute-0 podman[252860]: 2026-02-18 15:16:06.741609512 +0000 UTC m=+0.067803377 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3)
Feb 18 15:16:06 compute-0 podman[252861]: 2026-02-18 15:16:06.751254733 +0000 UTC m=+0.073686424 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, name=ubi9, architecture=x86_64, com.redhat.component=ubi9-container, release=1214.1726694543, config_id=kepler, container_name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Feb 18 15:16:06 compute-0 podman[252859]: 2026-02-18 15:16:06.757305075 +0000 UTC m=+0.083306685 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.849 189020 DEBUG nova.compute.manager [req-8a76c72b-313f-44f9-9ac0-b4164b16d1f6 req-845801f6-f29a-47dd-9728-6ad8974346ab af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Received event network-vif-plugged-7514447c-f6a7-4670-817a-906ea6344789 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.849 189020 DEBUG oslo_concurrency.lockutils [req-8a76c72b-313f-44f9-9ac0-b4164b16d1f6 req-845801f6-f29a-47dd-9728-6ad8974346ab af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.850 189020 DEBUG oslo_concurrency.lockutils [req-8a76c72b-313f-44f9-9ac0-b4164b16d1f6 req-845801f6-f29a-47dd-9728-6ad8974346ab af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.850 189020 DEBUG oslo_concurrency.lockutils [req-8a76c72b-313f-44f9-9ac0-b4164b16d1f6 req-845801f6-f29a-47dd-9728-6ad8974346ab af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.851 189020 DEBUG nova.compute.manager [req-8a76c72b-313f-44f9-9ac0-b4164b16d1f6 req-845801f6-f29a-47dd-9728-6ad8974346ab af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Processing event network-vif-plugged-7514447c-f6a7-4670-817a-906ea6344789 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.852 189020 DEBUG nova.compute.manager [req-8a76c72b-313f-44f9-9ac0-b4164b16d1f6 req-845801f6-f29a-47dd-9728-6ad8974346ab af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Received event network-vif-deleted-ef12a070-08ba-4fda-8991-d7b4ec9d9258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.853 189020 DEBUG nova.compute.manager [req-8a76c72b-313f-44f9-9ac0-b4164b16d1f6 req-845801f6-f29a-47dd-9728-6ad8974346ab af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Received event network-vif-plugged-7514447c-f6a7-4670-817a-906ea6344789 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.854 189020 DEBUG oslo_concurrency.lockutils [req-8a76c72b-313f-44f9-9ac0-b4164b16d1f6 req-845801f6-f29a-47dd-9728-6ad8974346ab af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.854 189020 DEBUG oslo_concurrency.lockutils [req-8a76c72b-313f-44f9-9ac0-b4164b16d1f6 req-845801f6-f29a-47dd-9728-6ad8974346ab af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.855 189020 DEBUG oslo_concurrency.lockutils [req-8a76c72b-313f-44f9-9ac0-b4164b16d1f6 req-845801f6-f29a-47dd-9728-6ad8974346ab af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.855 189020 DEBUG nova.compute.manager [req-8a76c72b-313f-44f9-9ac0-b4164b16d1f6 req-845801f6-f29a-47dd-9728-6ad8974346ab af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] No waiting events found dispatching network-vif-plugged-7514447c-f6a7-4670-817a-906ea6344789 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.856 189020 WARNING nova.compute.manager [req-8a76c72b-313f-44f9-9ac0-b4164b16d1f6 req-845801f6-f29a-47dd-9728-6ad8974346ab af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Received unexpected event network-vif-plugged-7514447c-f6a7-4670-817a-906ea6344789 for instance with vm_state building and task_state spawning.#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.858 189020 DEBUG nova.compute.manager [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.864 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427766.8640928, 0914ee8e-421d-4e49-958e-4e659b7fdc22 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.864 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] VM Resumed (Lifecycle Event)#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.867 189020 DEBUG nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.873 189020 INFO nova.virt.libvirt.driver [-] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Instance spawned successfully.#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.874 189020 DEBUG nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.895 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.911 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.916 189020 DEBUG nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.916 189020 DEBUG nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.917 189020 DEBUG nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.917 189020 DEBUG nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.917 189020 DEBUG nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.918 189020 DEBUG nova.virt.libvirt.driver [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:06 compute-0 nova_compute[189016]: 2026-02-18 15:16:06.953 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.009 189020 INFO nova.compute.manager [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Took 13.16 seconds to spawn the instance on the hypervisor.#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.009 189020 DEBUG nova.compute.manager [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.110 189020 INFO nova.compute.manager [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Took 13.80 seconds to build instance.#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.134 189020 DEBUG oslo_concurrency.lockutils [None req-9367205f-216c-43c1-8869-075dbe8f3fde d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.185 189020 DEBUG nova.compute.manager [req-fffc90bf-6959-4d7a-a9ec-6ef547be9421 req-bc5aea72-0b18-49e5-8c07-f6e895c85dac af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Received event network-vif-plugged-2dcd7fbf-cb55-4dab-878d-1256e048fe07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.185 189020 DEBUG oslo_concurrency.lockutils [req-fffc90bf-6959-4d7a-a9ec-6ef547be9421 req-bc5aea72-0b18-49e5-8c07-f6e895c85dac af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.186 189020 DEBUG oslo_concurrency.lockutils [req-fffc90bf-6959-4d7a-a9ec-6ef547be9421 req-bc5aea72-0b18-49e5-8c07-f6e895c85dac af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.186 189020 DEBUG oslo_concurrency.lockutils [req-fffc90bf-6959-4d7a-a9ec-6ef547be9421 req-bc5aea72-0b18-49e5-8c07-f6e895c85dac af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.186 189020 DEBUG nova.compute.manager [req-fffc90bf-6959-4d7a-a9ec-6ef547be9421 req-bc5aea72-0b18-49e5-8c07-f6e895c85dac af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Processing event network-vif-plugged-2dcd7fbf-cb55-4dab-878d-1256e048fe07 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.186 189020 DEBUG nova.compute.manager [req-fffc90bf-6959-4d7a-a9ec-6ef547be9421 req-bc5aea72-0b18-49e5-8c07-f6e895c85dac af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Received event network-vif-plugged-2dcd7fbf-cb55-4dab-878d-1256e048fe07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.187 189020 DEBUG oslo_concurrency.lockutils [req-fffc90bf-6959-4d7a-a9ec-6ef547be9421 req-bc5aea72-0b18-49e5-8c07-f6e895c85dac af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.187 189020 DEBUG oslo_concurrency.lockutils [req-fffc90bf-6959-4d7a-a9ec-6ef547be9421 req-bc5aea72-0b18-49e5-8c07-f6e895c85dac af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.187 189020 DEBUG oslo_concurrency.lockutils [req-fffc90bf-6959-4d7a-a9ec-6ef547be9421 req-bc5aea72-0b18-49e5-8c07-f6e895c85dac af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.187 189020 DEBUG nova.compute.manager [req-fffc90bf-6959-4d7a-a9ec-6ef547be9421 req-bc5aea72-0b18-49e5-8c07-f6e895c85dac af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] No waiting events found dispatching network-vif-plugged-2dcd7fbf-cb55-4dab-878d-1256e048fe07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.188 189020 WARNING nova.compute.manager [req-fffc90bf-6959-4d7a-a9ec-6ef547be9421 req-bc5aea72-0b18-49e5-8c07-f6e895c85dac af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Received unexpected event network-vif-plugged-2dcd7fbf-cb55-4dab-878d-1256e048fe07 for instance with vm_state building and task_state spawning.#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.188 189020 DEBUG nova.compute.manager [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.193 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427767.1925879, df0877c8-0a3d-46d8-aa05-76d180a75938 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.193 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] VM Resumed (Lifecycle Event)#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.196 189020 DEBUG nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.202 189020 INFO nova.virt.libvirt.driver [-] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Instance spawned successfully.#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.202 189020 DEBUG nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.221 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.231 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.237 189020 DEBUG nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.237 189020 DEBUG nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.238 189020 DEBUG nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.238 189020 DEBUG nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.238 189020 DEBUG nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.239 189020 DEBUG nova.virt.libvirt.driver [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.258 189020 INFO nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Creating config drive at /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk.config#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.263 189020 DEBUG oslo_concurrency.processutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp706o4l3p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.281 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.308 189020 INFO nova.compute.manager [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Took 13.17 seconds to spawn the instance on the hypervisor.#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.309 189020 DEBUG nova.compute.manager [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.382 189020 INFO nova.compute.manager [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Took 14.02 seconds to build instance.#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.395 189020 DEBUG oslo_concurrency.processutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp706o4l3p" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.422 189020 DEBUG oslo_concurrency.lockutils [None req-f73b6a67-7670-4489-bbed-f7c9811ab270 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "df0877c8-0a3d-46d8-aa05-76d180a75938" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:07 compute-0 kernel: tap00cd0913-0a: entered promiscuous mode
Feb 18 15:16:07 compute-0 NetworkManager[57258]: <info>  [1771427767.4618] manager: (tap00cd0913-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Feb 18 15:16:07 compute-0 ovn_controller[99062]: 2026-02-18T15:16:07Z|00084|binding|INFO|Claiming lport 00cd0913-0a46-457d-9b30-4007ec209a54 for this chassis.
Feb 18 15:16:07 compute-0 ovn_controller[99062]: 2026-02-18T15:16:07Z|00085|binding|INFO|00cd0913-0a46-457d-9b30-4007ec209a54: Claiming fa:16:3e:94:08:4b 10.100.0.5
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.465 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.482 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:08:4b 10.100.0.5'], port_security=['fa:16:3e:94:08:4b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '538e968b-7f01-4e6b-af67-182df12fedec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa0dcae9-fb1d-4854-8c2f-bb40797fee0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e70a93fe3e61494488f1032883dfa661', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09aecdb9-59f2-4183-9b24-62b699a2bf6b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=574fcf17-bc03-4a8d-9cae-ab8364ce298a, chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], logical_port=00cd0913-0a46-457d-9b30-4007ec209a54) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.485 108400 INFO neutron.agent.ovn.metadata.agent [-] Port 00cd0913-0a46-457d-9b30-4007ec209a54 in datapath aa0dcae9-fb1d-4854-8c2f-bb40797fee0c bound to our chassis#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.487 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.491 108400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa0dcae9-fb1d-4854-8c2f-bb40797fee0c#033[00m
Feb 18 15:16:07 compute-0 ovn_controller[99062]: 2026-02-18T15:16:07Z|00086|binding|INFO|Setting lport 00cd0913-0a46-457d-9b30-4007ec209a54 ovn-installed in OVS
Feb 18 15:16:07 compute-0 ovn_controller[99062]: 2026-02-18T15:16:07Z|00087|binding|INFO|Setting lport 00cd0913-0a46-457d-9b30-4007ec209a54 up in Southbound
Feb 18 15:16:07 compute-0 systemd-machined[158361]: New machine qemu-9-instance-00000009.
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.498 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.503 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b69229-df48-4388-92c2-7d3403d34660]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.505 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa0dcae9-f1 in ovnmeta-aa0dcae9-fb1d-4854-8c2f-bb40797fee0c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 18 15:16:07 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.508 242262 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa0dcae9-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.508 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[3665e6e6-6604-44a7-a6bf-4be4855ee8b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.510 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[f8faf44e-7a82-483a-9510-b14a82ee0430]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.529 108948 DEBUG oslo.privsep.daemon [-] privsep: reply[4229fbb2-3028-48b0-a691-bcbf96262757]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:07 compute-0 systemd-udevd[252937]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 15:16:07 compute-0 NetworkManager[57258]: <info>  [1771427767.5528] device (tap00cd0913-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 18 15:16:07 compute-0 NetworkManager[57258]: <info>  [1771427767.5575] device (tap00cd0913-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.562 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[02eca964-8d6e-40ec-888f-994f49a38154]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.587 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[55719a83-8978-485f-bd23-f0942a3aa40e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.594 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[a7bd752f-7d17-4dff-bc42-64884700bfb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:07 compute-0 NetworkManager[57258]: <info>  [1771427767.5966] manager: (tapaa0dcae9-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.629 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[ba7d86e0-07ab-4a78-94b4-5c77687e15e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.633 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ec5253-cb6e-499b-beca-d422c41f11d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.637 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:07 compute-0 NetworkManager[57258]: <info>  [1771427767.6550] device (tapaa0dcae9-f0): carrier: link connected
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.660 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[9c7a3766-7a2c-4285-9b20-a5517ea1ff3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.677 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[ab82eda0-ced3-4b46-b130-3ba759dff933]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa0dcae9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:fd:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486510, 'reachable_time': 36275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252968, 'error': None, 'target': 'ovnmeta-aa0dcae9-fb1d-4854-8c2f-bb40797fee0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.696 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[3b64f6b8-4031-4a0e-aa46-127d25222568]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6d:fdfe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 486510, 'tstamp': 486510}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252969, 'error': None, 'target': 'ovnmeta-aa0dcae9-fb1d-4854-8c2f-bb40797fee0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.711 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[29f58e47-60f0-4e54-a739-4bb489c5a21e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa0dcae9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:fd:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486510, 'reachable_time': 36275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252970, 'error': None, 'target': 'ovnmeta-aa0dcae9-fb1d-4854-8c2f-bb40797fee0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.736 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3c147a-868f-44be-9a19-8521de30c637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.779 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5f9f46-4df1-4896-bbfb-5a7c96e1bf96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.781 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa0dcae9-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.781 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.781 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa0dcae9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:07 compute-0 NetworkManager[57258]: <info>  [1771427767.7846] manager: (tapaa0dcae9-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Feb 18 15:16:07 compute-0 kernel: tapaa0dcae9-f0: entered promiscuous mode
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.789 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa0dcae9-f0, col_values=(('external_ids', {'iface-id': '61d27581-bc51-4cfa-981d-b25d24632870'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.789 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:07 compute-0 ovn_controller[99062]: 2026-02-18T15:16:07Z|00088|binding|INFO|Releasing lport 61d27581-bc51-4cfa-981d-b25d24632870 from this chassis (sb_readonly=0)
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.793 108400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa0dcae9-fb1d-4854-8c2f-bb40797fee0c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa0dcae9-fb1d-4854-8c2f-bb40797fee0c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.794 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[eccde2ac-b3fd-454b-b17e-fd227abce578]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.795 108400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: global
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    log         /dev/log local0 debug
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    log-tag     haproxy-metadata-proxy-aa0dcae9-fb1d-4854-8c2f-bb40797fee0c
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    user        root
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    group       root
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    maxconn     1024
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    pidfile     /var/lib/neutron/external/pids/aa0dcae9-fb1d-4854-8c2f-bb40797fee0c.pid.haproxy
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    daemon
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: defaults
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    log global
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    mode http
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    option httplog
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    option dontlognull
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    option http-server-close
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    option forwardfor
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    retries                 3
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    timeout http-request    30s
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    timeout connect         30s
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    timeout client          32s
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    timeout server          32s
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    timeout http-keep-alive 30s
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: listen listener
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    bind 169.254.169.254:80
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    server metadata /var/lib/neutron/metadata_proxy
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]:    http-request add-header X-OVN-Network-ID aa0dcae9-fb1d-4854-8c2f-bb40797fee0c
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 18 15:16:07 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:07.796 108400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa0dcae9-fb1d-4854-8c2f-bb40797fee0c', 'env', 'PROCESS_TAG=haproxy-aa0dcae9-fb1d-4854-8c2f-bb40797fee0c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa0dcae9-fb1d-4854-8c2f-bb40797fee0c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 18 15:16:07 compute-0 nova_compute[189016]: 2026-02-18 15:16:07.798 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:08 compute-0 podman[253001]: 2026-02-18 15:16:08.244800885 +0000 UTC m=+0.078717349 container create b16312fa5cd54e73e1f5f40b220d8038af2be988bd3596ac773b426815ec5ea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa0dcae9-fb1d-4854-8c2f-bb40797fee0c, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 18 15:16:08 compute-0 systemd[1]: Started libpod-conmon-b16312fa5cd54e73e1f5f40b220d8038af2be988bd3596ac773b426815ec5ea9.scope.
Feb 18 15:16:08 compute-0 podman[253001]: 2026-02-18 15:16:08.204410555 +0000 UTC m=+0.038327019 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 18 15:16:08 compute-0 systemd[1]: Started libcrun container.
Feb 18 15:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/239433e5786dc3930d2c95acea3ae95180e0983c7922186f849283281460d89c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 18 15:16:08 compute-0 podman[253001]: 2026-02-18 15:16:08.354788217 +0000 UTC m=+0.188704701 container init b16312fa5cd54e73e1f5f40b220d8038af2be988bd3596ac773b426815ec5ea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa0dcae9-fb1d-4854-8c2f-bb40797fee0c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Feb 18 15:16:08 compute-0 podman[253001]: 2026-02-18 15:16:08.362811617 +0000 UTC m=+0.196728091 container start b16312fa5cd54e73e1f5f40b220d8038af2be988bd3596ac773b426815ec5ea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa0dcae9-fb1d-4854-8c2f-bb40797fee0c, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 18 15:16:08 compute-0 neutron-haproxy-ovnmeta-aa0dcae9-fb1d-4854-8c2f-bb40797fee0c[253014]: [NOTICE]   (253018) : New worker (253020) forked
Feb 18 15:16:08 compute-0 neutron-haproxy-ovnmeta-aa0dcae9-fb1d-4854-8c2f-bb40797fee0c[253014]: [NOTICE]   (253018) : Loading success.
Feb 18 15:16:08 compute-0 nova_compute[189016]: 2026-02-18 15:16:08.585 189020 DEBUG nova.network.neutron [req-a17510c0-98a2-4012-8290-2bd91356a243 req-d4dc2bc8-4e0f-4aee-a9a1-83ce82c05e44 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Updated VIF entry in instance network info cache for port 00cd0913-0a46-457d-9b30-4007ec209a54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 18 15:16:08 compute-0 nova_compute[189016]: 2026-02-18 15:16:08.586 189020 DEBUG nova.network.neutron [req-a17510c0-98a2-4012-8290-2bd91356a243 req-d4dc2bc8-4e0f-4aee-a9a1-83ce82c05e44 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Updating instance_info_cache with network_info: [{"id": "00cd0913-0a46-457d-9b30-4007ec209a54", "address": "fa:16:3e:94:08:4b", "network": {"id": "aa0dcae9-fb1d-4854-8c2f-bb40797fee0c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-836147848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e70a93fe3e61494488f1032883dfa661", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00cd0913-0a", "ovs_interfaceid": "00cd0913-0a46-457d-9b30-4007ec209a54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:16:08 compute-0 nova_compute[189016]: 2026-02-18 15:16:08.605 189020 DEBUG oslo_concurrency.lockutils [req-a17510c0-98a2-4012-8290-2bd91356a243 req-d4dc2bc8-4e0f-4aee-a9a1-83ce82c05e44 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:16:08 compute-0 nova_compute[189016]: 2026-02-18 15:16:08.997 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427768.9966416, 538e968b-7f01-4e6b-af67-182df12fedec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:16:08 compute-0 nova_compute[189016]: 2026-02-18 15:16:08.999 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] VM Started (Lifecycle Event)#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.025 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.032 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427768.9967682, 538e968b-7f01-4e6b-af67-182df12fedec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.033 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] VM Paused (Lifecycle Event)#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.060 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.066 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.082 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.342 189020 DEBUG nova.compute.manager [req-cc154bac-7c0a-467d-80d6-a073084c5eed req-787357ad-ee6f-4d1d-947c-a5c243475fe2 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Received event network-vif-plugged-00cd0913-0a46-457d-9b30-4007ec209a54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.343 189020 DEBUG oslo_concurrency.lockutils [req-cc154bac-7c0a-467d-80d6-a073084c5eed req-787357ad-ee6f-4d1d-947c-a5c243475fe2 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "538e968b-7f01-4e6b-af67-182df12fedec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.344 189020 DEBUG oslo_concurrency.lockutils [req-cc154bac-7c0a-467d-80d6-a073084c5eed req-787357ad-ee6f-4d1d-947c-a5c243475fe2 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "538e968b-7f01-4e6b-af67-182df12fedec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.347 189020 DEBUG oslo_concurrency.lockutils [req-cc154bac-7c0a-467d-80d6-a073084c5eed req-787357ad-ee6f-4d1d-947c-a5c243475fe2 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "538e968b-7f01-4e6b-af67-182df12fedec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.347 189020 DEBUG nova.compute.manager [req-cc154bac-7c0a-467d-80d6-a073084c5eed req-787357ad-ee6f-4d1d-947c-a5c243475fe2 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Processing event network-vif-plugged-00cd0913-0a46-457d-9b30-4007ec209a54 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.349 189020 DEBUG nova.compute.manager [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.367 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427769.3540366, 538e968b-7f01-4e6b-af67-182df12fedec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.382 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] VM Resumed (Lifecycle Event)#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.386 189020 DEBUG nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.394 189020 INFO nova.virt.libvirt.driver [-] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Instance spawned successfully.#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.395 189020 DEBUG nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.399 189020 DEBUG nova.network.neutron [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Updated VIF entry in instance network info cache for port 2dcd7fbf-cb55-4dab-878d-1256e048fe07. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.400 189020 DEBUG nova.network.neutron [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Updating instance_info_cache with network_info: [{"id": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "address": "fa:16:3e:ed:a6:7b", "network": {"id": "9089fbbe-3179-4b8d-b7a6-10b908ca65c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1534667452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b77ca7f6eb84bd0b8aa97e012cf6161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dcd7fbf-cb", "ovs_interfaceid": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.404 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.412 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.424 189020 DEBUG nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.425 189020 DEBUG nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.425 189020 DEBUG nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.426 189020 DEBUG nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.426 189020 DEBUG nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.427 189020 DEBUG nova.virt.libvirt.driver [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.434 189020 DEBUG oslo_concurrency.lockutils [req-84818fcf-bffa-416c-b5ac-07f4aa244982 req-0bca408b-a6e5-480e-9c11-a2e0dd8320bc af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-df0877c8-0a3d-46d8-aa05-76d180a75938" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.435 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.495 189020 INFO nova.compute.manager [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Took 11.20 seconds to spawn the instance on the hypervisor.#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.496 189020 DEBUG nova.compute.manager [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.566 189020 INFO nova.compute.manager [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Took 12.09 seconds to build instance.#033[00m
Feb 18 15:16:09 compute-0 nova_compute[189016]: 2026-02-18 15:16:09.584 189020 DEBUG oslo_concurrency.lockutils [None req-04d08b63-5447-40c2-b7e6-22eda8ad5ecf 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Lock "538e968b-7f01-4e6b-af67-182df12fedec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:11 compute-0 NetworkManager[57258]: <info>  [1771427771.0969] manager: (patch-provnet-947cd5ec-5d87-4629-9a2b-88c4cca86a0a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Feb 18 15:16:11 compute-0 NetworkManager[57258]: <info>  [1771427771.0977] manager: (patch-br-int-to-provnet-947cd5ec-5d87-4629-9a2b-88c4cca86a0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Feb 18 15:16:11 compute-0 nova_compute[189016]: 2026-02-18 15:16:11.104 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:11 compute-0 nova_compute[189016]: 2026-02-18 15:16:11.120 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:11 compute-0 ovn_controller[99062]: 2026-02-18T15:16:11Z|00089|binding|INFO|Releasing lport b78157d5-9744-4676-ba4f-c8dc9c568cc2 from this chassis (sb_readonly=0)
Feb 18 15:16:11 compute-0 ovn_controller[99062]: 2026-02-18T15:16:11Z|00090|binding|INFO|Releasing lport 61d27581-bc51-4cfa-981d-b25d24632870 from this chassis (sb_readonly=0)
Feb 18 15:16:11 compute-0 ovn_controller[99062]: 2026-02-18T15:16:11Z|00091|binding|INFO|Releasing lport 43e7172a-6c7b-4dec-b23f-fc0c94739d6c from this chassis (sb_readonly=0)
Feb 18 15:16:11 compute-0 nova_compute[189016]: 2026-02-18 15:16:11.148 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:11 compute-0 nova_compute[189016]: 2026-02-18 15:16:11.156 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:11 compute-0 nova_compute[189016]: 2026-02-18 15:16:11.587 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:11 compute-0 nova_compute[189016]: 2026-02-18 15:16:11.690 189020 DEBUG nova.compute.manager [req-b87d32c9-5c95-45f4-bb2c-9c1c6296b783 req-59a134a2-6d14-4cf6-b56a-305f583c3c77 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Received event network-vif-plugged-00cd0913-0a46-457d-9b30-4007ec209a54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:16:11 compute-0 nova_compute[189016]: 2026-02-18 15:16:11.690 189020 DEBUG oslo_concurrency.lockutils [req-b87d32c9-5c95-45f4-bb2c-9c1c6296b783 req-59a134a2-6d14-4cf6-b56a-305f583c3c77 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "538e968b-7f01-4e6b-af67-182df12fedec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:11 compute-0 nova_compute[189016]: 2026-02-18 15:16:11.691 189020 DEBUG oslo_concurrency.lockutils [req-b87d32c9-5c95-45f4-bb2c-9c1c6296b783 req-59a134a2-6d14-4cf6-b56a-305f583c3c77 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "538e968b-7f01-4e6b-af67-182df12fedec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:11 compute-0 nova_compute[189016]: 2026-02-18 15:16:11.691 189020 DEBUG oslo_concurrency.lockutils [req-b87d32c9-5c95-45f4-bb2c-9c1c6296b783 req-59a134a2-6d14-4cf6-b56a-305f583c3c77 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "538e968b-7f01-4e6b-af67-182df12fedec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:11 compute-0 nova_compute[189016]: 2026-02-18 15:16:11.692 189020 DEBUG nova.compute.manager [req-b87d32c9-5c95-45f4-bb2c-9c1c6296b783 req-59a134a2-6d14-4cf6-b56a-305f583c3c77 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] No waiting events found dispatching network-vif-plugged-00cd0913-0a46-457d-9b30-4007ec209a54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:16:11 compute-0 nova_compute[189016]: 2026-02-18 15:16:11.692 189020 WARNING nova.compute.manager [req-b87d32c9-5c95-45f4-bb2c-9c1c6296b783 req-59a134a2-6d14-4cf6-b56a-305f583c3c77 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Received unexpected event network-vif-plugged-00cd0913-0a46-457d-9b30-4007ec209a54 for instance with vm_state active and task_state None.#033[00m
Feb 18 15:16:11 compute-0 ovn_controller[99062]: 2026-02-18T15:16:11Z|00092|binding|INFO|Releasing lport b78157d5-9744-4676-ba4f-c8dc9c568cc2 from this chassis (sb_readonly=0)
Feb 18 15:16:11 compute-0 ovn_controller[99062]: 2026-02-18T15:16:11Z|00093|binding|INFO|Releasing lport 61d27581-bc51-4cfa-981d-b25d24632870 from this chassis (sb_readonly=0)
Feb 18 15:16:11 compute-0 ovn_controller[99062]: 2026-02-18T15:16:11Z|00094|binding|INFO|Releasing lport 43e7172a-6c7b-4dec-b23f-fc0c94739d6c from this chassis (sb_readonly=0)
Feb 18 15:16:11 compute-0 nova_compute[189016]: 2026-02-18 15:16:11.987 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:12 compute-0 nova_compute[189016]: 2026-02-18 15:16:12.436 189020 DEBUG nova.compute.manager [req-ba811d84-4ef6-4e00-b641-4c7e63115cc7 req-d6ee044b-59d2-4fc8-9f50-1d1301b4312f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Received event network-changed-2dcd7fbf-cb55-4dab-878d-1256e048fe07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:16:12 compute-0 nova_compute[189016]: 2026-02-18 15:16:12.437 189020 DEBUG nova.compute.manager [req-ba811d84-4ef6-4e00-b641-4c7e63115cc7 req-d6ee044b-59d2-4fc8-9f50-1d1301b4312f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Refreshing instance network info cache due to event network-changed-2dcd7fbf-cb55-4dab-878d-1256e048fe07. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 18 15:16:12 compute-0 nova_compute[189016]: 2026-02-18 15:16:12.437 189020 DEBUG oslo_concurrency.lockutils [req-ba811d84-4ef6-4e00-b641-4c7e63115cc7 req-d6ee044b-59d2-4fc8-9f50-1d1301b4312f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-df0877c8-0a3d-46d8-aa05-76d180a75938" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:16:12 compute-0 nova_compute[189016]: 2026-02-18 15:16:12.438 189020 DEBUG oslo_concurrency.lockutils [req-ba811d84-4ef6-4e00-b641-4c7e63115cc7 req-d6ee044b-59d2-4fc8-9f50-1d1301b4312f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-df0877c8-0a3d-46d8-aa05-76d180a75938" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:16:12 compute-0 nova_compute[189016]: 2026-02-18 15:16:12.438 189020 DEBUG nova.network.neutron [req-ba811d84-4ef6-4e00-b641-4c7e63115cc7 req-d6ee044b-59d2-4fc8-9f50-1d1301b4312f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Refreshing network info cache for port 2dcd7fbf-cb55-4dab-878d-1256e048fe07 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 18 15:16:12 compute-0 nova_compute[189016]: 2026-02-18 15:16:12.641 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.028 189020 DEBUG oslo_concurrency.lockutils [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Acquiring lock "df0877c8-0a3d-46d8-aa05-76d180a75938" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.028 189020 DEBUG oslo_concurrency.lockutils [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "df0877c8-0a3d-46d8-aa05-76d180a75938" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.029 189020 DEBUG oslo_concurrency.lockutils [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Acquiring lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.029 189020 DEBUG oslo_concurrency.lockutils [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.030 189020 DEBUG oslo_concurrency.lockutils [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.031 189020 INFO nova.compute.manager [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Terminating instance#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.032 189020 DEBUG nova.compute.manager [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:16:13 compute-0 kernel: tap2dcd7fbf-cb (unregistering): left promiscuous mode
Feb 18 15:16:13 compute-0 NetworkManager[57258]: <info>  [1771427773.0654] device (tap2dcd7fbf-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.076 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:13 compute-0 ovn_controller[99062]: 2026-02-18T15:16:13Z|00095|binding|INFO|Releasing lport 2dcd7fbf-cb55-4dab-878d-1256e048fe07 from this chassis (sb_readonly=0)
Feb 18 15:16:13 compute-0 ovn_controller[99062]: 2026-02-18T15:16:13Z|00096|binding|INFO|Setting lport 2dcd7fbf-cb55-4dab-878d-1256e048fe07 down in Southbound
Feb 18 15:16:13 compute-0 ovn_controller[99062]: 2026-02-18T15:16:13Z|00097|binding|INFO|Removing iface tap2dcd7fbf-cb ovn-installed in OVS
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.095 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:13 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:13.098 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:a6:7b 10.100.0.13'], port_security=['fa:16:3e:ed:a6:7b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'df0877c8-0a3d-46d8-aa05-76d180a75938', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9089fbbe-3179-4b8d-b7a6-10b908ca65c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b77ca7f6eb84bd0b8aa97e012cf6161', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aa10ddf9-cda7-411c-941e-8993607f811b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e53fcb8-ef5e-4f73-94e4-08c795529883, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], logical_port=2dcd7fbf-cb55-4dab-878d-1256e048fe07) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:16:13 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:13.099 108400 INFO neutron.agent.ovn.metadata.agent [-] Port 2dcd7fbf-cb55-4dab-878d-1256e048fe07 in datapath 9089fbbe-3179-4b8d-b7a6-10b908ca65c8 unbound from our chassis#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.101 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:13 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:13.107 108400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9089fbbe-3179-4b8d-b7a6-10b908ca65c8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 18 15:16:13 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Feb 18 15:16:13 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 6.499s CPU time.
Feb 18 15:16:13 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:13.112 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3a3fc0-40ba-4f9b-bcb6-e8966af897f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:13 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:13.113 108400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8 namespace which is not needed anymore#033[00m
Feb 18 15:16:13 compute-0 systemd-machined[158361]: Machine qemu-8-instance-00000008 terminated.
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.291 189020 INFO nova.virt.libvirt.driver [-] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Instance destroyed successfully.#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.292 189020 DEBUG nova.objects.instance [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lazy-loading 'resources' on Instance uuid df0877c8-0a3d-46d8-aa05-76d180a75938 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:16:13 compute-0 neutron-haproxy-ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8[252842]: [NOTICE]   (252846) : haproxy version is 2.8.14-c23fe91
Feb 18 15:16:13 compute-0 neutron-haproxy-ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8[252842]: [NOTICE]   (252846) : path to executable is /usr/sbin/haproxy
Feb 18 15:16:13 compute-0 neutron-haproxy-ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8[252842]: [WARNING]  (252846) : Exiting Master process...
Feb 18 15:16:13 compute-0 neutron-haproxy-ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8[252842]: [ALERT]    (252846) : Current worker (252848) exited with code 143 (Terminated)
Feb 18 15:16:13 compute-0 neutron-haproxy-ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8[252842]: [WARNING]  (252846) : All workers exited. Exiting... (0)
Feb 18 15:16:13 compute-0 systemd[1]: libpod-07d79b6b9dbc97581aa0cd7507d4b83638429313f635d4eaa490c03cc5e78d33.scope: Deactivated successfully.
Feb 18 15:16:13 compute-0 conmon[252842]: conmon 07d79b6b9dbc97581aa0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-07d79b6b9dbc97581aa0cd7507d4b83638429313f635d4eaa490c03cc5e78d33.scope/container/memory.events
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.314 189020 DEBUG nova.virt.libvirt.vif [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-18T15:15:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1382464634',display_name='tempest-ServersTestJSON-server-1382464634',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1382464634',id=8,image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPV+VS8QDvIbZdoFg9k559rswy3U9pGEFTwfUSaJ3kBkhg0YzNdri1JEKtugebWlPyY+7VkYzwk1UXJfrufY1zRV9J4Xr8FVtQBFXYpI4MfGzJIep0jpozk42iTvnnJtZw==',key_name='tempest-keypair-374307044',keypairs=<?>,launch_index=0,launched_at=2026-02-18T15:16:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1b77ca7f6eb84bd0b8aa97e012cf6161',ramdisk_id='',reservation_id='r-6iu0onyl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-813304290',owner_user_name='tempest-ServersTestJSON-813304290-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-18T15:16:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93b35b58df3646ce83b1a741e5458aa4',uuid=df0877c8-0a3d-46d8-aa05-76d180a75938,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "address": "fa:16:3e:ed:a6:7b", "network": {"id": "9089fbbe-3179-4b8d-b7a6-10b908ca65c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1534667452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b77ca7f6eb84bd0b8aa97e012cf6161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dcd7fbf-cb", "ovs_interfaceid": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.315 189020 DEBUG nova.network.os_vif_util [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Converting VIF {"id": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "address": "fa:16:3e:ed:a6:7b", "network": {"id": "9089fbbe-3179-4b8d-b7a6-10b908ca65c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1534667452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b77ca7f6eb84bd0b8aa97e012cf6161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dcd7fbf-cb", "ovs_interfaceid": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.316 189020 DEBUG nova.network.os_vif_util [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:a6:7b,bridge_name='br-int',has_traffic_filtering=True,id=2dcd7fbf-cb55-4dab-878d-1256e048fe07,network=Network(9089fbbe-3179-4b8d-b7a6-10b908ca65c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dcd7fbf-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.317 189020 DEBUG os_vif [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:a6:7b,bridge_name='br-int',has_traffic_filtering=True,id=2dcd7fbf-cb55-4dab-878d-1256e048fe07,network=Network(9089fbbe-3179-4b8d-b7a6-10b908ca65c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dcd7fbf-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 18 15:16:13 compute-0 podman[253062]: 2026-02-18 15:16:13.319492252 +0000 UTC m=+0.084614347 container died 07d79b6b9dbc97581aa0cd7507d4b83638429313f635d4eaa490c03cc5e78d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.322 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.322 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2dcd7fbf-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.325 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.329 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.334 189020 INFO os_vif [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:a6:7b,bridge_name='br-int',has_traffic_filtering=True,id=2dcd7fbf-cb55-4dab-878d-1256e048fe07,network=Network(9089fbbe-3179-4b8d-b7a6-10b908ca65c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2dcd7fbf-cb')#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.335 189020 INFO nova.virt.libvirt.driver [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Deleting instance files /var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938_del#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.336 189020 INFO nova.virt.libvirt.driver [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Deletion of /var/lib/nova/instances/df0877c8-0a3d-46d8-aa05-76d180a75938_del complete#033[00m
Feb 18 15:16:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-07d79b6b9dbc97581aa0cd7507d4b83638429313f635d4eaa490c03cc5e78d33-userdata-shm.mount: Deactivated successfully.
Feb 18 15:16:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dd966040ae4c25752430199ed38156e868d999239f5c44323a48afce6b14e83-merged.mount: Deactivated successfully.
Feb 18 15:16:13 compute-0 podman[253062]: 2026-02-18 15:16:13.381449442 +0000 UTC m=+0.146571537 container cleanup 07d79b6b9dbc97581aa0cd7507d4b83638429313f635d4eaa490c03cc5e78d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.391 189020 INFO nova.compute.manager [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Took 0.36 seconds to destroy the instance on the hypervisor.#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.392 189020 DEBUG oslo.service.loopingcall [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.393 189020 DEBUG nova.compute.manager [-] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.393 189020 DEBUG nova.network.neutron [-] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 18 15:16:13 compute-0 systemd[1]: libpod-conmon-07d79b6b9dbc97581aa0cd7507d4b83638429313f635d4eaa490c03cc5e78d33.scope: Deactivated successfully.
Feb 18 15:16:13 compute-0 podman[253107]: 2026-02-18 15:16:13.467005843 +0000 UTC m=+0.062381482 container remove 07d79b6b9dbc97581aa0cd7507d4b83638429313f635d4eaa490c03cc5e78d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 18 15:16:13 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:13.473 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[901cd764-e421-416e-aea2-2e228ee575b5]: (4, ('Wed Feb 18 03:16:13 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8 (07d79b6b9dbc97581aa0cd7507d4b83638429313f635d4eaa490c03cc5e78d33)\n07d79b6b9dbc97581aa0cd7507d4b83638429313f635d4eaa490c03cc5e78d33\nWed Feb 18 03:16:13 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8 (07d79b6b9dbc97581aa0cd7507d4b83638429313f635d4eaa490c03cc5e78d33)\n07d79b6b9dbc97581aa0cd7507d4b83638429313f635d4eaa490c03cc5e78d33\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:13 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:13.474 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[2a95829e-a2d5-472d-b576-141465ec9336]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:13 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:13.475 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9089fbbe-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.478 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:13 compute-0 kernel: tap9089fbbe-30: left promiscuous mode
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.482 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:13 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:13.484 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[b229d4ef-f32d-4ae3-befc-4a3bc8e64767]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.494 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:13 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:13.499 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[38dd3d70-8ac7-4aaa-871a-219b5eb4fc4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:13 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:13.501 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[efded8b0-119e-46b3-b2db-7b9ede9d600a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:13 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:13.515 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[f4219391-6939-467d-afbb-f4c5cad9563c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486226, 'reachable_time': 31142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253119, 'error': None, 'target': 'ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:13 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:13.519 108948 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9089fbbe-3179-4b8d-b7a6-10b908ca65c8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 18 15:16:13 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:13.519 108948 DEBUG oslo.privsep.daemon [-] privsep: reply[f8fea195-ea84-448f-aec1-5e79b28edc81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:16:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d9089fbbe\x2d3179\x2d4b8d\x2db7a6\x2d10b908ca65c8.mount: Deactivated successfully.
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.993 189020 DEBUG nova.compute.manager [req-cef04492-4ffd-4edd-92f6-0290f65db8a6 req-cae7e3be-1dc7-400c-b71f-5b3225d06340 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Received event network-changed-7514447c-f6a7-4670-817a-906ea6344789 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.994 189020 DEBUG nova.compute.manager [req-cef04492-4ffd-4edd-92f6-0290f65db8a6 req-cae7e3be-1dc7-400c-b71f-5b3225d06340 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Refreshing instance network info cache due to event network-changed-7514447c-f6a7-4670-817a-906ea6344789. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.994 189020 DEBUG oslo_concurrency.lockutils [req-cef04492-4ffd-4edd-92f6-0290f65db8a6 req-cae7e3be-1dc7-400c-b71f-5b3225d06340 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.994 189020 DEBUG oslo_concurrency.lockutils [req-cef04492-4ffd-4edd-92f6-0290f65db8a6 req-cae7e3be-1dc7-400c-b71f-5b3225d06340 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:16:13 compute-0 nova_compute[189016]: 2026-02-18 15:16:13.994 189020 DEBUG nova.network.neutron [req-cef04492-4ffd-4edd-92f6-0290f65db8a6 req-cae7e3be-1dc7-400c-b71f-5b3225d06340 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Refreshing network info cache for port 7514447c-f6a7-4670-817a-906ea6344789 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 18 15:16:14 compute-0 nova_compute[189016]: 2026-02-18 15:16:14.330 189020 DEBUG nova.network.neutron [req-ba811d84-4ef6-4e00-b641-4c7e63115cc7 req-d6ee044b-59d2-4fc8-9f50-1d1301b4312f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Updated VIF entry in instance network info cache for port 2dcd7fbf-cb55-4dab-878d-1256e048fe07. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 18 15:16:14 compute-0 nova_compute[189016]: 2026-02-18 15:16:14.330 189020 DEBUG nova.network.neutron [req-ba811d84-4ef6-4e00-b641-4c7e63115cc7 req-d6ee044b-59d2-4fc8-9f50-1d1301b4312f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Updating instance_info_cache with network_info: [{"id": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "address": "fa:16:3e:ed:a6:7b", "network": {"id": "9089fbbe-3179-4b8d-b7a6-10b908ca65c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1534667452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b77ca7f6eb84bd0b8aa97e012cf6161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2dcd7fbf-cb", "ovs_interfaceid": "2dcd7fbf-cb55-4dab-878d-1256e048fe07", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:16:14 compute-0 nova_compute[189016]: 2026-02-18 15:16:14.381 189020 DEBUG oslo_concurrency.lockutils [req-ba811d84-4ef6-4e00-b641-4c7e63115cc7 req-d6ee044b-59d2-4fc8-9f50-1d1301b4312f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-df0877c8-0a3d-46d8-aa05-76d180a75938" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:16:14 compute-0 nova_compute[189016]: 2026-02-18 15:16:14.646 189020 DEBUG nova.compute.manager [req-6f136a00-3a41-4661-bf0e-5c1e8291ca29 req-160f9338-4398-460a-8959-dc80c10b1b89 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Received event network-vif-unplugged-2dcd7fbf-cb55-4dab-878d-1256e048fe07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:16:14 compute-0 nova_compute[189016]: 2026-02-18 15:16:14.646 189020 DEBUG oslo_concurrency.lockutils [req-6f136a00-3a41-4661-bf0e-5c1e8291ca29 req-160f9338-4398-460a-8959-dc80c10b1b89 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:14 compute-0 nova_compute[189016]: 2026-02-18 15:16:14.646 189020 DEBUG oslo_concurrency.lockutils [req-6f136a00-3a41-4661-bf0e-5c1e8291ca29 req-160f9338-4398-460a-8959-dc80c10b1b89 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:14 compute-0 nova_compute[189016]: 2026-02-18 15:16:14.647 189020 DEBUG oslo_concurrency.lockutils [req-6f136a00-3a41-4661-bf0e-5c1e8291ca29 req-160f9338-4398-460a-8959-dc80c10b1b89 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:14 compute-0 nova_compute[189016]: 2026-02-18 15:16:14.647 189020 DEBUG nova.compute.manager [req-6f136a00-3a41-4661-bf0e-5c1e8291ca29 req-160f9338-4398-460a-8959-dc80c10b1b89 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] No waiting events found dispatching network-vif-unplugged-2dcd7fbf-cb55-4dab-878d-1256e048fe07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:16:14 compute-0 nova_compute[189016]: 2026-02-18 15:16:14.647 189020 DEBUG nova.compute.manager [req-6f136a00-3a41-4661-bf0e-5c1e8291ca29 req-160f9338-4398-460a-8959-dc80c10b1b89 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Received event network-vif-unplugged-2dcd7fbf-cb55-4dab-878d-1256e048fe07 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 18 15:16:14 compute-0 podman[253122]: 2026-02-18 15:16:14.762800668 +0000 UTC m=+0.079139171 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 18 15:16:14 compute-0 podman[253121]: 2026-02-18 15:16:14.793353883 +0000 UTC m=+0.109656735 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 18 15:16:15 compute-0 nova_compute[189016]: 2026-02-18 15:16:15.769 189020 DEBUG nova.network.neutron [-] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:16:15 compute-0 nova_compute[189016]: 2026-02-18 15:16:15.796 189020 INFO nova.compute.manager [-] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Took 2.40 seconds to deallocate network for instance.#033[00m
Feb 18 15:16:15 compute-0 nova_compute[189016]: 2026-02-18 15:16:15.855 189020 DEBUG oslo_concurrency.lockutils [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:15 compute-0 nova_compute[189016]: 2026-02-18 15:16:15.856 189020 DEBUG oslo_concurrency.lockutils [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:16:15 compute-0 nova_compute[189016]: 2026-02-18 15:16:15.977 189020 DEBUG nova.compute.provider_tree [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:16:15 compute-0 nova_compute[189016]: 2026-02-18 15:16:15.995 189020 DEBUG nova.scheduler.client.report [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:16:16 compute-0 nova_compute[189016]: 2026-02-18 15:16:16.025 189020 DEBUG oslo_concurrency.lockutils [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:16 compute-0 nova_compute[189016]: 2026-02-18 15:16:16.046 189020 INFO nova.scheduler.client.report [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Deleted allocations for instance df0877c8-0a3d-46d8-aa05-76d180a75938#033[00m
Feb 18 15:16:16 compute-0 nova_compute[189016]: 2026-02-18 15:16:16.148 189020 DEBUG oslo_concurrency.lockutils [None req-e817f767-b4fd-472d-b75b-50c3230583fb 93b35b58df3646ce83b1a741e5458aa4 1b77ca7f6eb84bd0b8aa97e012cf6161 - - default default] Lock "df0877c8-0a3d-46d8-aa05-76d180a75938" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:16:16 compute-0 nova_compute[189016]: 2026-02-18 15:16:16.868 189020 DEBUG nova.compute.manager [req-c203f352-f11d-401c-90a3-6966e050c073 req-56f2dfde-609b-4021-9a13-4152bc70c90f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Received event network-vif-plugged-2dcd7fbf-cb55-4dab-878d-1256e048fe07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:16:16 compute-0 nova_compute[189016]: 2026-02-18 15:16:16.870 189020 DEBUG oslo_concurrency.lockutils [req-c203f352-f11d-401c-90a3-6966e050c073 req-56f2dfde-609b-4021-9a13-4152bc70c90f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:16:16 compute-0 nova_compute[189016]: 2026-02-18 15:16:16.870 189020 DEBUG oslo_concurrency.lockutils [req-c203f352-f11d-401c-90a3-6966e050c073 req-56f2dfde-609b-4021-9a13-4152bc70c90f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 15:16:16 compute-0 nova_compute[189016]: 2026-02-18 15:16:16.871 189020 DEBUG oslo_concurrency.lockutils [req-c203f352-f11d-401c-90a3-6966e050c073 req-56f2dfde-609b-4021-9a13-4152bc70c90f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "df0877c8-0a3d-46d8-aa05-76d180a75938-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 15:16:16 compute-0 nova_compute[189016]: 2026-02-18 15:16:16.871 189020 DEBUG nova.compute.manager [req-c203f352-f11d-401c-90a3-6966e050c073 req-56f2dfde-609b-4021-9a13-4152bc70c90f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] No waiting events found dispatching network-vif-plugged-2dcd7fbf-cb55-4dab-878d-1256e048fe07 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 18 15:16:16 compute-0 nova_compute[189016]: 2026-02-18 15:16:16.872 189020 WARNING nova.compute.manager [req-c203f352-f11d-401c-90a3-6966e050c073 req-56f2dfde-609b-4021-9a13-4152bc70c90f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Received unexpected event network-vif-plugged-2dcd7fbf-cb55-4dab-878d-1256e048fe07 for instance with vm_state deleted and task_state None.
Feb 18 15:16:16 compute-0 nova_compute[189016]: 2026-02-18 15:16:16.872 189020 DEBUG nova.compute.manager [req-c203f352-f11d-401c-90a3-6966e050c073 req-56f2dfde-609b-4021-9a13-4152bc70c90f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Received event network-changed-00cd0913-0a46-457d-9b30-4007ec209a54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 18 15:16:16 compute-0 nova_compute[189016]: 2026-02-18 15:16:16.873 189020 DEBUG nova.compute.manager [req-c203f352-f11d-401c-90a3-6966e050c073 req-56f2dfde-609b-4021-9a13-4152bc70c90f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Refreshing instance network info cache due to event network-changed-00cd0913-0a46-457d-9b30-4007ec209a54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 18 15:16:16 compute-0 nova_compute[189016]: 2026-02-18 15:16:16.873 189020 DEBUG oslo_concurrency.lockutils [req-c203f352-f11d-401c-90a3-6966e050c073 req-56f2dfde-609b-4021-9a13-4152bc70c90f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 18 15:16:16 compute-0 nova_compute[189016]: 2026-02-18 15:16:16.874 189020 DEBUG oslo_concurrency.lockutils [req-c203f352-f11d-401c-90a3-6966e050c073 req-56f2dfde-609b-4021-9a13-4152bc70c90f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 18 15:16:16 compute-0 nova_compute[189016]: 2026-02-18 15:16:16.875 189020 DEBUG nova.network.neutron [req-c203f352-f11d-401c-90a3-6966e050c073 req-56f2dfde-609b-4021-9a13-4152bc70c90f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Refreshing network info cache for port 00cd0913-0a46-457d-9b30-4007ec209a54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 18 15:16:17 compute-0 nova_compute[189016]: 2026-02-18 15:16:17.242 189020 DEBUG nova.network.neutron [req-cef04492-4ffd-4edd-92f6-0290f65db8a6 req-cae7e3be-1dc7-400c-b71f-5b3225d06340 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Updated VIF entry in instance network info cache for port 7514447c-f6a7-4670-817a-906ea6344789. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 18 15:16:17 compute-0 nova_compute[189016]: 2026-02-18 15:16:17.243 189020 DEBUG nova.network.neutron [req-cef04492-4ffd-4edd-92f6-0290f65db8a6 req-cae7e3be-1dc7-400c-b71f-5b3225d06340 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Updating instance_info_cache with network_info: [{"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 18 15:16:17 compute-0 nova_compute[189016]: 2026-02-18 15:16:17.265 189020 DEBUG oslo_concurrency.lockutils [req-cef04492-4ffd-4edd-92f6-0290f65db8a6 req-cae7e3be-1dc7-400c-b71f-5b3225d06340 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 18 15:16:17 compute-0 nova_compute[189016]: 2026-02-18 15:16:17.317 189020 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771427762.3117218, b81109ce-a363-4a87-b75b-e20429154b94 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 18 15:16:17 compute-0 nova_compute[189016]: 2026-02-18 15:16:17.318 189020 INFO nova.compute.manager [-] [instance: b81109ce-a363-4a87-b75b-e20429154b94] VM Stopped (Lifecycle Event)
Feb 18 15:16:17 compute-0 nova_compute[189016]: 2026-02-18 15:16:17.341 189020 DEBUG nova.compute.manager [None req-80aee6b7-39e2-4c41-924e-301220ad68a7 - - - - - -] [instance: b81109ce-a363-4a87-b75b-e20429154b94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 18 15:16:17 compute-0 nova_compute[189016]: 2026-02-18 15:16:17.644 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:16:18 compute-0 nova_compute[189016]: 2026-02-18 15:16:18.327 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:16:18 compute-0 podman[253163]: 2026-02-18 15:16:18.794278757 +0000 UTC m=+0.120708030 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 18 15:16:19 compute-0 nova_compute[189016]: 2026-02-18 15:16:19.468 189020 DEBUG nova.network.neutron [req-c203f352-f11d-401c-90a3-6966e050c073 req-56f2dfde-609b-4021-9a13-4152bc70c90f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Updated VIF entry in instance network info cache for port 00cd0913-0a46-457d-9b30-4007ec209a54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 18 15:16:19 compute-0 nova_compute[189016]: 2026-02-18 15:16:19.469 189020 DEBUG nova.network.neutron [req-c203f352-f11d-401c-90a3-6966e050c073 req-56f2dfde-609b-4021-9a13-4152bc70c90f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Updating instance_info_cache with network_info: [{"id": "00cd0913-0a46-457d-9b30-4007ec209a54", "address": "fa:16:3e:94:08:4b", "network": {"id": "aa0dcae9-fb1d-4854-8c2f-bb40797fee0c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-836147848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e70a93fe3e61494488f1032883dfa661", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00cd0913-0a", "ovs_interfaceid": "00cd0913-0a46-457d-9b30-4007ec209a54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 18 15:16:19 compute-0 nova_compute[189016]: 2026-02-18 15:16:19.493 189020 DEBUG oslo_concurrency.lockutils [req-c203f352-f11d-401c-90a3-6966e050c073 req-56f2dfde-609b-4021-9a13-4152bc70c90f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 18 15:16:19 compute-0 nova_compute[189016]: 2026-02-18 15:16:19.494 189020 DEBUG nova.compute.manager [req-c203f352-f11d-401c-90a3-6966e050c073 req-56f2dfde-609b-4021-9a13-4152bc70c90f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Received event network-vif-deleted-2dcd7fbf-cb55-4dab-878d-1256e048fe07 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 18 15:16:22 compute-0 nova_compute[189016]: 2026-02-18 15:16:22.648 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:16:23 compute-0 ovn_controller[99062]: 2026-02-18T15:16:23Z|00098|binding|INFO|Releasing lport b78157d5-9744-4676-ba4f-c8dc9c568cc2 from this chassis (sb_readonly=0)
Feb 18 15:16:23 compute-0 ovn_controller[99062]: 2026-02-18T15:16:23Z|00099|binding|INFO|Releasing lport 61d27581-bc51-4cfa-981d-b25d24632870 from this chassis (sb_readonly=0)
Feb 18 15:16:23 compute-0 nova_compute[189016]: 2026-02-18 15:16:23.238 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:16:23 compute-0 nova_compute[189016]: 2026-02-18 15:16:23.332 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:16:27 compute-0 nova_compute[189016]: 2026-02-18 15:16:27.651 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:16:28 compute-0 nova_compute[189016]: 2026-02-18 15:16:28.289 189020 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771427773.2883182, df0877c8-0a3d-46d8-aa05-76d180a75938 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 18 15:16:28 compute-0 nova_compute[189016]: 2026-02-18 15:16:28.290 189020 INFO nova.compute.manager [-] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] VM Stopped (Lifecycle Event)
Feb 18 15:16:28 compute-0 nova_compute[189016]: 2026-02-18 15:16:28.324 189020 DEBUG nova.compute.manager [None req-860397fb-b8dc-43d6-b7d5-20d69a1967f8 - - - - - -] [instance: df0877c8-0a3d-46d8-aa05-76d180a75938] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 18 15:16:28 compute-0 nova_compute[189016]: 2026-02-18 15:16:28.335 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:16:29 compute-0 podman[204930]: time="2026-02-18T15:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:16:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30472 "" "Go-http-client/1.1"
Feb 18 15:16:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4840 "" "Go-http-client/1.1"
Feb 18 15:16:31 compute-0 openstack_network_exporter[208107]: ERROR   15:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:16:31 compute-0 openstack_network_exporter[208107]: ERROR   15:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:16:31 compute-0 podman[253191]: 2026-02-18 15:16:31.791722097 +0000 UTC m=+0.091639394 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 18 15:16:31 compute-0 podman[253192]: 2026-02-18 15:16:31.804528637 +0000 UTC m=+0.106580077 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, version=9.7, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347)
Feb 18 15:16:32 compute-0 nova_compute[189016]: 2026-02-18 15:16:32.653 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:16:33 compute-0 nova_compute[189016]: 2026-02-18 15:16:33.339 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:16:37 compute-0 nova_compute[189016]: 2026-02-18 15:16:37.656 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:16:37 compute-0 podman[253235]: 2026-02-18 15:16:37.898267366 +0000 UTC m=+0.225016410 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 15:16:37 compute-0 podman[253237]: 2026-02-18 15:16:37.937425445 +0000 UTC m=+0.251304047 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.4, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.openshift.expose-services=, container_name=kepler, distribution-scope=public, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0)
Feb 18 15:16:37 compute-0 podman[253236]: 2026-02-18 15:16:37.95959292 +0000 UTC m=+0.281296128 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 18 15:16:38 compute-0 nova_compute[189016]: 2026-02-18 15:16:38.343 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:16:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:41.461 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 15:16:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:41.465 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 15:16:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:41.468 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 15:16:42 compute-0 nova_compute[189016]: 2026-02-18 15:16:42.659 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:16:42 compute-0 nova_compute[189016]: 2026-02-18 15:16:42.704 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:16:43 compute-0 ovn_controller[99062]: 2026-02-18T15:16:43Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:34:24:0f 10.100.0.8
Feb 18 15:16:43 compute-0 ovn_controller[99062]: 2026-02-18T15:16:43Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:34:24:0f 10.100.0.8
Feb 18 15:16:43 compute-0 nova_compute[189016]: 2026-02-18 15:16:43.346 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:16:45 compute-0 podman[253319]: 2026-02-18 15:16:45.748489755 +0000 UTC m=+0.067732596 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 15:16:45 compute-0 podman[253318]: 2026-02-18 15:16:45.762405053 +0000 UTC m=+0.082739181 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 18 15:16:47 compute-0 ovn_controller[99062]: 2026-02-18T15:16:47Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:94:08:4b 10.100.0.5
Feb 18 15:16:47 compute-0 ovn_controller[99062]: 2026-02-18T15:16:47Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:94:08:4b 10.100.0.5
Feb 18 15:16:47 compute-0 nova_compute[189016]: 2026-02-18 15:16:47.670 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:48 compute-0 nova_compute[189016]: 2026-02-18 15:16:48.349 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:49.298 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:bc:a6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:81:9f:51:00:b1'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:16:49 compute-0 nova_compute[189016]: 2026-02-18 15:16:49.300 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:49.302 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 18 15:16:49 compute-0 podman[253359]: 2026-02-18 15:16:49.77776963 +0000 UTC m=+0.108382962 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 18 15:16:52 compute-0 nova_compute[189016]: 2026-02-18 15:16:52.669 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:53 compute-0 nova_compute[189016]: 2026-02-18 15:16:53.352 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:54 compute-0 nova_compute[189016]: 2026-02-18 15:16:54.553 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:55 compute-0 nova_compute[189016]: 2026-02-18 15:16:55.046 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:16:56 compute-0 nova_compute[189016]: 2026-02-18 15:16:56.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:16:56 compute-0 nova_compute[189016]: 2026-02-18 15:16:56.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:16:56 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:16:56.309 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bff0df27-aa33-4d98-b417-cc9248f7a486, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:16:57 compute-0 nova_compute[189016]: 2026-02-18 15:16:57.670 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:58 compute-0 nova_compute[189016]: 2026-02-18 15:16:58.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:16:58 compute-0 nova_compute[189016]: 2026-02-18 15:16:58.355 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:16:59 compute-0 nova_compute[189016]: 2026-02-18 15:16:59.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:16:59 compute-0 nova_compute[189016]: 2026-02-18 15:16:59.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:16:59 compute-0 nova_compute[189016]: 2026-02-18 15:16:59.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 15:16:59 compute-0 nova_compute[189016]: 2026-02-18 15:16:59.680 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:16:59 compute-0 nova_compute[189016]: 2026-02-18 15:16:59.681 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:16:59 compute-0 nova_compute[189016]: 2026-02-18 15:16:59.681 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 15:16:59 compute-0 nova_compute[189016]: 2026-02-18 15:16:59.681 189020 DEBUG nova.objects.instance [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0914ee8e-421d-4e49-958e-4e659b7fdc22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:16:59 compute-0 podman[204930]: time="2026-02-18T15:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:16:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30472 "" "Go-http-client/1.1"
Feb 18 15:16:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4840 "" "Go-http-client/1.1"
Feb 18 15:17:00 compute-0 nova_compute[189016]: 2026-02-18 15:17:00.373 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:01 compute-0 openstack_network_exporter[208107]: ERROR   15:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:17:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:17:01 compute-0 openstack_network_exporter[208107]: ERROR   15:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:17:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:17:02 compute-0 nova_compute[189016]: 2026-02-18 15:17:02.105 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Updating instance_info_cache with network_info: [{"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:17:02 compute-0 nova_compute[189016]: 2026-02-18 15:17:02.140 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:17:02 compute-0 nova_compute[189016]: 2026-02-18 15:17:02.140 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 15:17:02 compute-0 nova_compute[189016]: 2026-02-18 15:17:02.141 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:17:02 compute-0 nova_compute[189016]: 2026-02-18 15:17:02.674 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:02 compute-0 podman[253386]: 2026-02-18 15:17:02.774886329 +0000 UTC m=+0.086770832 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, version=9.7, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9/ubi-minimal)
Feb 18 15:17:02 compute-0 podman[253385]: 2026-02-18 15:17:02.792048138 +0000 UTC m=+0.110554916 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 18 15:17:03 compute-0 nova_compute[189016]: 2026-02-18 15:17:03.358 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:03 compute-0 nova_compute[189016]: 2026-02-18 15:17:03.748 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:04 compute-0 nova_compute[189016]: 2026-02-18 15:17:04.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:17:05 compute-0 nova_compute[189016]: 2026-02-18 15:17:05.045 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:17:05 compute-0 nova_compute[189016]: 2026-02-18 15:17:05.176 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:17:06 compute-0 nova_compute[189016]: 2026-02-18 15:17:06.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:17:06 compute-0 nova_compute[189016]: 2026-02-18 15:17:06.133 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:17:06 compute-0 nova_compute[189016]: 2026-02-18 15:17:06.134 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:17:06 compute-0 nova_compute[189016]: 2026-02-18 15:17:06.134 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:17:06 compute-0 nova_compute[189016]: 2026-02-18 15:17:06.134 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:17:06 compute-0 nova_compute[189016]: 2026-02-18 15:17:06.279 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:17:06 compute-0 nova_compute[189016]: 2026-02-18 15:17:06.366 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:17:06 compute-0 nova_compute[189016]: 2026-02-18 15:17:06.368 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:17:06 compute-0 nova_compute[189016]: 2026-02-18 15:17:06.431 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:17:06 compute-0 nova_compute[189016]: 2026-02-18 15:17:06.441 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:17:06 compute-0 nova_compute[189016]: 2026-02-18 15:17:06.518 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:17:06 compute-0 nova_compute[189016]: 2026-02-18 15:17:06.519 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:17:06 compute-0 nova_compute[189016]: 2026-02-18 15:17:06.585 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:17:07 compute-0 nova_compute[189016]: 2026-02-18 15:17:07.066 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:17:07 compute-0 nova_compute[189016]: 2026-02-18 15:17:07.068 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4969MB free_disk=72.14999008178711GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 15:17:07 compute-0 nova_compute[189016]: 2026-02-18 15:17:07.068 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:17:07 compute-0 nova_compute[189016]: 2026-02-18 15:17:07.068 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:17:07 compute-0 nova_compute[189016]: 2026-02-18 15:17:07.630 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 0914ee8e-421d-4e49-958e-4e659b7fdc22 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:17:07 compute-0 nova_compute[189016]: 2026-02-18 15:17:07.631 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 538e968b-7f01-4e6b-af67-182df12fedec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:17:07 compute-0 nova_compute[189016]: 2026-02-18 15:17:07.631 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 15:17:07 compute-0 nova_compute[189016]: 2026-02-18 15:17:07.631 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 15:17:07 compute-0 nova_compute[189016]: 2026-02-18 15:17:07.677 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:07 compute-0 nova_compute[189016]: 2026-02-18 15:17:07.843 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:17:07 compute-0 nova_compute[189016]: 2026-02-18 15:17:07.875 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:17:08 compute-0 nova_compute[189016]: 2026-02-18 15:17:08.125 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 15:17:08 compute-0 nova_compute[189016]: 2026-02-18 15:17:08.126 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:17:08 compute-0 nova_compute[189016]: 2026-02-18 15:17:08.361 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:08 compute-0 podman[253444]: 2026-02-18 15:17:08.788238496 +0000 UTC m=+0.112656689 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 18 15:17:08 compute-0 podman[253443]: 2026-02-18 15:17:08.7895949 +0000 UTC m=+0.116887255 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 18 15:17:08 compute-0 podman[253445]: 2026-02-18 15:17:08.821333824 +0000 UTC m=+0.097503280 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0)
Feb 18 15:17:10 compute-0 nova_compute[189016]: 2026-02-18 15:17:10.175 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:10 compute-0 nova_compute[189016]: 2026-02-18 15:17:10.710 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:12 compute-0 nova_compute[189016]: 2026-02-18 15:17:12.681 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:13 compute-0 nova_compute[189016]: 2026-02-18 15:17:13.364 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:16 compute-0 podman[253499]: 2026-02-18 15:17:16.765420456 +0000 UTC m=+0.078528446 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Feb 18 15:17:16 compute-0 podman[253500]: 2026-02-18 15:17:16.785322674 +0000 UTC m=+0.095485390 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 18 15:17:17 compute-0 nova_compute[189016]: 2026-02-18 15:17:17.126 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:17:17 compute-0 nova_compute[189016]: 2026-02-18 15:17:17.684 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:18 compute-0 nova_compute[189016]: 2026-02-18 15:17:18.254 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:18 compute-0 nova_compute[189016]: 2026-02-18 15:17:18.367 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:20 compute-0 podman[253542]: 2026-02-18 15:17:20.792336451 +0000 UTC m=+0.117605233 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 18 15:17:21 compute-0 nova_compute[189016]: 2026-02-18 15:17:21.127 189020 DEBUG nova.objects.instance [None req-dc30c91d-927c-47b2-a1d5-74460a10e349 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Lazy-loading 'flavor' on Instance uuid 538e968b-7f01-4e6b-af67-182df12fedec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:17:21 compute-0 nova_compute[189016]: 2026-02-18 15:17:21.176 189020 DEBUG oslo_concurrency.lockutils [None req-dc30c91d-927c-47b2-a1d5-74460a10e349 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Acquiring lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:17:21 compute-0 nova_compute[189016]: 2026-02-18 15:17:21.176 189020 DEBUG oslo_concurrency.lockutils [None req-dc30c91d-927c-47b2-a1d5-74460a10e349 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Acquired lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:17:22 compute-0 nova_compute[189016]: 2026-02-18 15:17:22.687 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:23 compute-0 nova_compute[189016]: 2026-02-18 15:17:23.371 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:24 compute-0 nova_compute[189016]: 2026-02-18 15:17:24.340 189020 DEBUG oslo_concurrency.lockutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Acquiring lock "0914ee8e-421d-4e49-958e-4e659b7fdc22" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:17:24 compute-0 nova_compute[189016]: 2026-02-18 15:17:24.341 189020 DEBUG oslo_concurrency.lockutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:17:24 compute-0 nova_compute[189016]: 2026-02-18 15:17:24.342 189020 INFO nova.compute.manager [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Rebooting instance#033[00m
Feb 18 15:17:24 compute-0 nova_compute[189016]: 2026-02-18 15:17:24.357 189020 DEBUG oslo_concurrency.lockutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Acquiring lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:17:24 compute-0 nova_compute[189016]: 2026-02-18 15:17:24.357 189020 DEBUG oslo_concurrency.lockutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Acquired lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:17:24 compute-0 nova_compute[189016]: 2026-02-18 15:17:24.358 189020 DEBUG nova.network.neutron [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.203 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.205 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.205 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.207 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f78deee07d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.208 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.208 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.208 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.209 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.209 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.209 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.209 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.209 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.209 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.209 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.210 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.210 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.210 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.210 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.210 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.210 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.210 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.210 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.210 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.211 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.211 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.211 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.211 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.211 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.211 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1dca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.216 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 0914ee8e-421d-4e49-958e-4e659b7fdc22 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 18 15:17:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:25.218 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/0914ee8e-421d-4e49-958e-4e659b7fdc22 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}fc9f399571f1151dad02e1ad7f2b10f5a5ac66aa5da5d4c981c78739a1fdba51" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 18 15:17:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:26.363 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1979 Content-Type: application/json Date: Wed, 18 Feb 2026 15:17:25 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-ede6e29e-04a2-427b-a67f-17b2fe20a9f8 x-openstack-request-id: req-ede6e29e-04a2-427b-a67f-17b2fe20a9f8 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 18 15:17:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:26.364 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "0914ee8e-421d-4e49-958e-4e659b7fdc22", "name": "tempest-ServerActionsTestJSON-server-81098757", "status": "HARD_REBOOT", "tenant_id": "f278181458244cb4836c191782a17069", "user_id": "d7476a1b8c814ab687793dcb836094b1", "metadata": {}, "hostId": "486d6a3a2cc9ac09d315b66cb01168e8ea7aee7519b7844389ed0aa2", "image": {"id": "3b4a4a6a-1650-453f-ba10-3bb16d71641c", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/3b4a4a6a-1650-453f-ba10-3bb16d71641c"}]}, "flavor": {"id": "1682e27b-a40b-4634-9ba2-5b28d38a8558", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/1682e27b-a40b-4634-9ba2-5b28d38a8558"}]}, "created": "2026-02-18T15:15:51Z", "updated": "2026-02-18T15:17:24Z", "addresses": {"tempest-ServerActionsTestJSON-561283655-network": [{"version": 4, "addr": "10.100.0.8", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:34:24:0f"}, {"version": 4, "addr": "192.168.122.216", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:34:24:0f"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/0914ee8e-421d-4e49-958e-4e659b7fdc22"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/0914ee8e-421d-4e49-958e-4e659b7fdc22"}], "OS-DCF:diskConfig": "MANUAL", "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-238254365", "OS-SRV-USG:launched_at": "2026-02-18T15:16:07.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1031124630"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": 
"rebooting_hard", "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 18 15:17:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:26.364 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/0914ee8e-421d-4e49-958e-4e659b7fdc22 used request id req-ede6e29e-04a2-427b-a67f-17b2fe20a9f8 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 18 15:17:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:26.366 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0914ee8e-421d-4e49-958e-4e659b7fdc22', 'name': 'tempest-ServerActionsTestJSON-server-81098757', 'flavor': {'id': '1682e27b-a40b-4634-9ba2-5b28d38a8558', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '3b4a4a6a-1650-453f-ba10-3bb16d71641c'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f278181458244cb4836c191782a17069', 'user_id': 'd7476a1b8c814ab687793dcb836094b1', 'hostId': '486d6a3a2cc9ac09d315b66cb01168e8ea7aee7519b7844389ed0aa2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:17:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:26.369 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 538e968b-7f01-4e6b-af67-182df12fedec from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Feb 18 15:17:26 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:26.370 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/538e968b-7f01-4e6b-af67-182df12fedec -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}fc9f399571f1151dad02e1ad7f2b10f5a5ac66aa5da5d4c981c78739a1fdba51" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Feb 18 15:17:26 compute-0 nova_compute[189016]: 2026-02-18 15:17:26.737 189020 DEBUG nova.network.neutron [None req-dc30c91d-927c-47b2-a1d5-74460a10e349 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.076 189020 DEBUG nova.network.neutron [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Updating instance_info_cache with network_info: [{"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.213 189020 DEBUG oslo_concurrency.lockutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Releasing lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.216 189020 DEBUG nova.compute.manager [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:17:27 compute-0 kernel: tap7514447c-f6 (unregistering): left promiscuous mode
Feb 18 15:17:27 compute-0 NetworkManager[57258]: <info>  [1771427847.5506] device (tap7514447c-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.562 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:27 compute-0 ovn_controller[99062]: 2026-02-18T15:17:27Z|00100|binding|INFO|Releasing lport 7514447c-f6a7-4670-817a-906ea6344789 from this chassis (sb_readonly=0)
Feb 18 15:17:27 compute-0 ovn_controller[99062]: 2026-02-18T15:17:27Z|00101|binding|INFO|Setting lport 7514447c-f6a7-4670-817a-906ea6344789 down in Southbound
Feb 18 15:17:27 compute-0 ovn_controller[99062]: 2026-02-18T15:17:27Z|00102|binding|INFO|Removing iface tap7514447c-f6 ovn-installed in OVS
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.566 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.570 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:27 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Feb 18 15:17:27 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 43.452s CPU time.
Feb 18 15:17:27 compute-0 systemd-machined[158361]: Machine qemu-7-instance-00000007 terminated.
Feb 18 15:17:27 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:27.623 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:24:0f 10.100.0.8'], port_security=['fa:16:3e:34:24:0f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0914ee8e-421d-4e49-958e-4e659b7fdc22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7799c1f7-b42b-4f46-a4f0-f189be986a35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f278181458244cb4836c191782a17069', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aa8824c1-6371-4fe5-a8cf-d0fe69d37717', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8327e11-8865-4094-ab67-65cb3aafaf73, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], logical_port=7514447c-f6a7-4670-817a-906ea6344789) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:17:27 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:27.626 108400 INFO neutron.agent.ovn.metadata.agent [-] Port 7514447c-f6a7-4670-817a-906ea6344789 in datapath 7799c1f7-b42b-4f46-a4f0-f189be986a35 unbound from our chassis#033[00m
Feb 18 15:17:27 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:27.628 108400 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7799c1f7-b42b-4f46-a4f0-f189be986a35, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 18 15:17:27 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:27.633 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[2195120c-4507-4892-baee-b6084b134e31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:27 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:27.634 108400 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35 namespace which is not needed anymore#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.689 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.739 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.746 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.780 189020 INFO nova.virt.libvirt.driver [-] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Instance destroyed successfully.#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.782 189020 DEBUG nova.objects.instance [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lazy-loading 'resources' on Instance uuid 0914ee8e-421d-4e49-958e-4e659b7fdc22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.842 189020 DEBUG nova.virt.libvirt.vif [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-18T15:15:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-81098757',display_name='tempest-ServerActionsTestJSON-server-81098757',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-81098757',id=7,image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJDCE2bGTc240mnDuFCQwo775RFg/uhDQhaw4PpU6AWyV4V2QOoLXLLRSspISvkSI6OMyoSPyUS6UNqASsDq57h8869z4QVr7NNF9GBrNesxXzdFiJ1rUKlUNDgICNGQQ==',key_name='tempest-keypair-238254365',keypairs=<?>,launch_index=0,launched_at=2026-02-18T15:16:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f278181458244cb4836c191782a17069',ramdisk_id='',reservation_id='r-0nnj0owd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-620119161',owner_user_name='tempest-ServerActionsTestJSON-620119161-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-18T15:17:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d7476a1b8c814ab687793dcb836094b1',uuid=0914ee8e-421d-4e49-958e-4e659b7fdc22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.843 189020 DEBUG nova.network.os_vif_util [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Converting VIF {"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.844 189020 DEBUG nova.network.os_vif_util [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:34:24:0f,bridge_name='br-int',has_traffic_filtering=True,id=7514447c-f6a7-4670-817a-906ea6344789,network=Network(7799c1f7-b42b-4f46-a4f0-f189be986a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7514447c-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.845 189020 DEBUG os_vif [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:24:0f,bridge_name='br-int',has_traffic_filtering=True,id=7514447c-f6a7-4670-817a-906ea6344789,network=Network(7799c1f7-b42b-4f46-a4f0-f189be986a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7514447c-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.848 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.849 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7514447c-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.851 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.855 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 18 15:17:27 compute-0 neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35[252762]: [NOTICE]   (252767) : haproxy version is 2.8.14-c23fe91
Feb 18 15:17:27 compute-0 neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35[252762]: [NOTICE]   (252767) : path to executable is /usr/sbin/haproxy
Feb 18 15:17:27 compute-0 neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35[252762]: [WARNING]  (252767) : Exiting Master process...
Feb 18 15:17:27 compute-0 neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35[252762]: [ALERT]    (252767) : Current worker (252769) exited with code 143 (Terminated)
Feb 18 15:17:27 compute-0 neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35[252762]: [WARNING]  (252767) : All workers exited. Exiting... (0)
Feb 18 15:17:27 compute-0 systemd[1]: libpod-192116fa46edf8668da6c40461c861bfe60e5fa410499199132b4d5dc7f651af.scope: Deactivated successfully.
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.863 189020 INFO os_vif [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:24:0f,bridge_name='br-int',has_traffic_filtering=True,id=7514447c-f6a7-4670-817a-906ea6344789,network=Network(7799c1f7-b42b-4f46-a4f0-f189be986a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7514447c-f6')#033[00m
Feb 18 15:17:27 compute-0 podman[253610]: 2026-02-18 15:17:27.86836943 +0000 UTC m=+0.081653624 container died 192116fa46edf8668da6c40461c861bfe60e5fa410499199132b4d5dc7f651af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.875 189020 DEBUG nova.virt.libvirt.driver [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Start _get_guest_xml network_info=[{"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=3b4a4a6a-1650-453f-ba10-3bb16d71641c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'image_id': '3b4a4a6a-1650-453f-ba10-3bb16d71641c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.883 189020 WARNING nova.virt.libvirt.driver [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.895 189020 DEBUG nova.virt.libvirt.host [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.898 189020 DEBUG nova.virt.libvirt.host [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.905 189020 DEBUG nova.virt.libvirt.host [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.906 189020 DEBUG nova.virt.libvirt.host [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.907 189020 DEBUG nova.virt.libvirt.driver [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.907 189020 DEBUG nova.virt.hardware [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-18T15:14:46Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1682e27b-a40b-4634-9ba2-5b28d38a8558',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=3b4a4a6a-1650-453f-ba10-3bb16d71641c,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.908 189020 DEBUG nova.virt.hardware [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.908 189020 DEBUG nova.virt.hardware [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.909 189020 DEBUG nova.virt.hardware [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.909 189020 DEBUG nova.virt.hardware [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.910 189020 DEBUG nova.virt.hardware [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.910 189020 DEBUG nova.virt.hardware [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.911 189020 DEBUG nova.virt.hardware [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.911 189020 DEBUG nova.virt.hardware [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.911 189020 DEBUG nova.virt.hardware [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.912 189020 DEBUG nova.virt.hardware [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.912 189020 DEBUG nova.objects.instance [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0914ee8e-421d-4e49-958e-4e659b7fdc22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:17:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-192116fa46edf8668da6c40461c861bfe60e5fa410499199132b4d5dc7f651af-userdata-shm.mount: Deactivated successfully.
Feb 18 15:17:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3ddd6f17e1c0fa84715e9a004b58efc5965158d9dcbebaf2d6808f0e0b1527e-merged.mount: Deactivated successfully.
Feb 18 15:17:27 compute-0 podman[253610]: 2026-02-18 15:17:27.938029632 +0000 UTC m=+0.151313826 container cleanup 192116fa46edf8668da6c40461c861bfe60e5fa410499199132b4d5dc7f651af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 18 15:17:27 compute-0 nova_compute[189016]: 2026-02-18 15:17:27.945 189020 DEBUG oslo_concurrency.processutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:17:27 compute-0 systemd[1]: libpod-conmon-192116fa46edf8668da6c40461c861bfe60e5fa410499199132b4d5dc7f651af.scope: Deactivated successfully.
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.023 189020 DEBUG oslo_concurrency.processutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.config --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.024 189020 DEBUG oslo_concurrency.lockutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Acquiring lock "/var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.025 189020 DEBUG oslo_concurrency.lockutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lock "/var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.026 189020 DEBUG oslo_concurrency.lockutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lock "/var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.027 189020 DEBUG nova.virt.libvirt.vif [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-18T15:15:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-81098757',display_name='tempest-ServerActionsTestJSON-server-81098757',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-81098757',id=7,image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJDCE2bGTc240mnDuFCQwo775RFg/uhDQhaw4PpU6AWyV4V2QOoLXLLRSspISvkSI6OMyoSPyUS6UNqASsDq57h8869z4QVr7NNF9GBrNesxXzdFiJ1rUKlUNDgICNGQQ==',key_name='tempest-keypair-238254365',keypairs=<?>,launch_index=0,launched_at=2026-02-18T15:16:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f278181458244cb4836c191782a17069',ramdisk_id='',reservation_id='r-0nnj0owd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-620119161',owner_user_name='tempest-ServerActionsTestJSON-620119161-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-18T15:17:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d7476a1b8c814ab687793dcb836094b1',uuid=0914ee8e-421d-4e49-958e-4e659b7fdc22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.028 189020 DEBUG nova.network.os_vif_util [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Converting VIF {"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.029 189020 DEBUG nova.network.os_vif_util [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:34:24:0f,bridge_name='br-int',has_traffic_filtering=True,id=7514447c-f6a7-4670-817a-906ea6344789,network=Network(7799c1f7-b42b-4f46-a4f0-f189be986a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7514447c-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.030 189020 DEBUG nova.objects.instance [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0914ee8e-421d-4e49-958e-4e659b7fdc22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.046 189020 DEBUG nova.virt.libvirt.driver [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] End _get_guest_xml xml=<domain type="kvm">
Feb 18 15:17:28 compute-0 nova_compute[189016]:  <uuid>0914ee8e-421d-4e49-958e-4e659b7fdc22</uuid>
Feb 18 15:17:28 compute-0 nova_compute[189016]:  <name>instance-00000007</name>
Feb 18 15:17:28 compute-0 nova_compute[189016]:  <memory>131072</memory>
Feb 18 15:17:28 compute-0 nova_compute[189016]:  <vcpu>1</vcpu>
Feb 18 15:17:28 compute-0 nova_compute[189016]:  <metadata>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <nova:name>tempest-ServerActionsTestJSON-server-81098757</nova:name>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <nova:creationTime>2026-02-18 15:17:27</nova:creationTime>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <nova:flavor name="m1.nano">
Feb 18 15:17:28 compute-0 nova_compute[189016]:        <nova:memory>128</nova:memory>
Feb 18 15:17:28 compute-0 nova_compute[189016]:        <nova:disk>1</nova:disk>
Feb 18 15:17:28 compute-0 nova_compute[189016]:        <nova:swap>0</nova:swap>
Feb 18 15:17:28 compute-0 nova_compute[189016]:        <nova:ephemeral>0</nova:ephemeral>
Feb 18 15:17:28 compute-0 nova_compute[189016]:        <nova:vcpus>1</nova:vcpus>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      </nova:flavor>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <nova:owner>
Feb 18 15:17:28 compute-0 nova_compute[189016]:        <nova:user uuid="d7476a1b8c814ab687793dcb836094b1">tempest-ServerActionsTestJSON-620119161-project-member</nova:user>
Feb 18 15:17:28 compute-0 nova_compute[189016]:        <nova:project uuid="f278181458244cb4836c191782a17069">tempest-ServerActionsTestJSON-620119161</nova:project>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      </nova:owner>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <nova:root type="image" uuid="3b4a4a6a-1650-453f-ba10-3bb16d71641c"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <nova:ports>
Feb 18 15:17:28 compute-0 nova_compute[189016]:        <nova:port uuid="7514447c-f6a7-4670-817a-906ea6344789">
Feb 18 15:17:28 compute-0 nova_compute[189016]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:        </nova:port>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      </nova:ports>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    </nova:instance>
Feb 18 15:17:28 compute-0 nova_compute[189016]:  </metadata>
Feb 18 15:17:28 compute-0 nova_compute[189016]:  <sysinfo type="smbios">
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <system>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <entry name="manufacturer">RDO</entry>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <entry name="product">OpenStack Compute</entry>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <entry name="serial">0914ee8e-421d-4e49-958e-4e659b7fdc22</entry>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <entry name="uuid">0914ee8e-421d-4e49-958e-4e659b7fdc22</entry>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <entry name="family">Virtual Machine</entry>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    </system>
Feb 18 15:17:28 compute-0 nova_compute[189016]:  </sysinfo>
Feb 18 15:17:28 compute-0 nova_compute[189016]:  <os>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <boot dev="hd"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <smbios mode="sysinfo"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:  </os>
Feb 18 15:17:28 compute-0 nova_compute[189016]:  <features>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <acpi/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <apic/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <vmcoreinfo/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:  </features>
Feb 18 15:17:28 compute-0 nova_compute[189016]:  <clock offset="utc">
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <timer name="pit" tickpolicy="delay"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <timer name="hpet" present="no"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:  </clock>
Feb 18 15:17:28 compute-0 nova_compute[189016]:  <cpu mode="host-model" match="exact">
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <topology sockets="1" cores="1" threads="1"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:  </cpu>
Feb 18 15:17:28 compute-0 nova_compute[189016]:  <devices>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <disk type="file" device="disk">
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <driver name="qemu" type="qcow2" cache="none"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <target dev="vda" bus="virtio"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <disk type="file" device="cdrom">
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <driver name="qemu" type="raw" cache="none"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.config"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <target dev="sda" bus="sata"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <interface type="ethernet">
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <mac address="fa:16:3e:34:24:0f"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <driver name="vhost" rx_queue_size="512"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <mtu size="1442"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <target dev="tap7514447c-f6"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    </interface>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <serial type="pty">
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <log file="/var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/console.log" append="off"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    </serial>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <video>
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    </video>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <input type="tablet" bus="usb"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <input type="keyboard" bus="usb"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <rng model="virtio">
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <backend model="random">/dev/urandom</backend>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    </rng>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <controller type="usb" index="0"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    <memballoon model="virtio">
Feb 18 15:17:28 compute-0 nova_compute[189016]:      <stats period="10"/>
Feb 18 15:17:28 compute-0 nova_compute[189016]:    </memballoon>
Feb 18 15:17:28 compute-0 nova_compute[189016]:  </devices>
Feb 18 15:17:28 compute-0 nova_compute[189016]: </domain>
Feb 18 15:17:28 compute-0 nova_compute[189016]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.049 189020 DEBUG oslo_concurrency.processutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:17:28 compute-0 podman[253640]: 2026-02-18 15:17:28.051668955 +0000 UTC m=+0.083090179 container remove 192116fa46edf8668da6c40461c861bfe60e5fa410499199132b4d5dc7f651af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.058 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[2acc93f9-41a4-4322-bf18-1363cfe88a3c]: (4, ('Wed Feb 18 03:17:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35 (192116fa46edf8668da6c40461c861bfe60e5fa410499199132b4d5dc7f651af)\n192116fa46edf8668da6c40461c861bfe60e5fa410499199132b4d5dc7f651af\nWed Feb 18 03:17:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35 (192116fa46edf8668da6c40461c861bfe60e5fa410499199132b4d5dc7f651af)\n192116fa46edf8668da6c40461c861bfe60e5fa410499199132b4d5dc7f651af\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.060 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[4a0c31bf-a37d-4d50-92c9-87ddddc270ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.062 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7799c1f7-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:17:28 compute-0 kernel: tap7799c1f7-b0: left promiscuous mode
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.070 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.073 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.079 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[68d320b7-ac1f-464c-9cdb-0572c4775aba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.095 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[8867b423-eb7a-4b0c-8a2a-3b81c5354a2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.099 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[766d5454-d982-4423-b321-52ac90df0bf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.123 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[a169cedb-bbb0-4c91-b6aa-68cc41620eb4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486127, 'reachable_time': 17227, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253658, 'error': None, 'target': 'ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d7799c1f7\x2db42b\x2d4f46\x2da4f0\x2df189be986a35.mount: Deactivated successfully.
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.131 108948 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.133 108948 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2a2d21-6578-42a0-98ed-702ff5f61d0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.138 189020 DEBUG oslo_concurrency.processutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.139 189020 DEBUG oslo_concurrency.processutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.192 189020 DEBUG oslo_concurrency.processutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.194 189020 DEBUG nova.objects.instance [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0914ee8e-421d-4e49-958e-4e659b7fdc22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.234 189020 DEBUG oslo_concurrency.processutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.288 189020 DEBUG oslo_concurrency.processutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.289 189020 DEBUG nova.virt.disk.api [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Checking if we can resize image /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.289 189020 DEBUG oslo_concurrency.processutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.342 189020 DEBUG oslo_concurrency.processutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.343 189020 DEBUG nova.virt.disk.api [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Cannot resize image /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.344 189020 DEBUG nova.objects.instance [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lazy-loading 'migration_context' on Instance uuid 0914ee8e-421d-4e49-958e-4e659b7fdc22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.364 189020 DEBUG nova.virt.libvirt.vif [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-18T15:15:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-81098757',display_name='tempest-ServerActionsTestJSON-server-81098757',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-81098757',id=7,image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJDCE2bGTc240mnDuFCQwo775RFg/uhDQhaw4PpU6AWyV4V2QOoLXLLRSspISvkSI6OMyoSPyUS6UNqASsDq57h8869z4QVr7NNF9GBrNesxXzdFiJ1rUKlUNDgICNGQQ==',key_name='tempest-keypair-238254365',keypairs=<?>,launch_index=0,launched_at=2026-02-18T15:16:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='f278181458244cb4836c191782a17069',ramdisk_id='',reservation_id='r-0nnj0owd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-620119161',owner_user_name='tempest-ServerActionsTestJSON-620119161-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2026-02-18T15:17:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d7476a1b8c814ab687793dcb836094b1',uuid=0914ee8e-421d-4e49-958e-4e659b7fdc22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.366 189020 DEBUG nova.network.os_vif_util [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Converting VIF {"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.367 189020 DEBUG nova.network.os_vif_util [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:34:24:0f,bridge_name='br-int',has_traffic_filtering=True,id=7514447c-f6a7-4670-817a-906ea6344789,network=Network(7799c1f7-b42b-4f46-a4f0-f189be986a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7514447c-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.367 189020 DEBUG os_vif [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:24:0f,bridge_name='br-int',has_traffic_filtering=True,id=7514447c-f6a7-4670-817a-906ea6344789,network=Network(7799c1f7-b42b-4f46-a4f0-f189be986a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7514447c-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.368 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.369 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.370 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.375 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.375 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7514447c-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.376 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7514447c-f6, col_values=(('external_ids', {'iface-id': '7514447c-f6a7-4670-817a-906ea6344789', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:24:0f', 'vm-uuid': '0914ee8e-421d-4e49-958e-4e659b7fdc22'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.378 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:28 compute-0 ovn_controller[99062]: 2026-02-18T15:17:28Z|00103|memory|INFO|peak resident set size grew 50% in last 2348.3 seconds, from 16128 kB to 24268 kB
Feb 18 15:17:28 compute-0 ovn_controller[99062]: 2026-02-18T15:17:28Z|00104|memory|INFO|idl-cells-OVN_Southbound:9988 idl-cells-Open_vSwitch:699 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:342 lflow-cache-entries-cache-matches:285 lflow-cache-size-KB:1406 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:576 ofctrl_installed_flow_usage-KB:420 ofctrl_sb_flow_ref_usage-KB:220
Feb 18 15:17:28 compute-0 NetworkManager[57258]: <info>  [1771427848.3814] manager: (tap7514447c-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.384 189020 DEBUG nova.compute.manager [req-ba47a847-830e-4506-864f-ce98e72a8486 req-c6d5fe72-62e4-4c8f-a91a-66345a55ad2a af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Received event network-changed-00cd0913-0a46-457d-9b30-4007ec209a54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.385 189020 DEBUG nova.compute.manager [req-ba47a847-830e-4506-864f-ce98e72a8486 req-c6d5fe72-62e4-4c8f-a91a-66345a55ad2a af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Refreshing instance network info cache due to event network-changed-00cd0913-0a46-457d-9b30-4007ec209a54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.386 189020 DEBUG oslo_concurrency.lockutils [req-ba47a847-830e-4506-864f-ce98e72a8486 req-c6d5fe72-62e4-4c8f-a91a-66345a55ad2a af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.386 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.388 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.390 189020 INFO os_vif [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:24:0f,bridge_name='br-int',has_traffic_filtering=True,id=7514447c-f6a7-4670-817a-906ea6344789,network=Network(7799c1f7-b42b-4f46-a4f0-f189be986a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7514447c-f6')#033[00m
Feb 18 15:17:28 compute-0 kernel: tap7514447c-f6: entered promiscuous mode
Feb 18 15:17:28 compute-0 NetworkManager[57258]: <info>  [1771427848.5004] manager: (tap7514447c-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Feb 18 15:17:28 compute-0 systemd-udevd[253577]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 15:17:28 compute-0 ovn_controller[99062]: 2026-02-18T15:17:28Z|00105|binding|INFO|Claiming lport 7514447c-f6a7-4670-817a-906ea6344789 for this chassis.
Feb 18 15:17:28 compute-0 ovn_controller[99062]: 2026-02-18T15:17:28Z|00106|binding|INFO|7514447c-f6a7-4670-817a-906ea6344789: Claiming fa:16:3e:34:24:0f 10.100.0.8
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.507 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:28 compute-0 ovn_controller[99062]: 2026-02-18T15:17:28Z|00107|binding|INFO|Setting lport 7514447c-f6a7-4670-817a-906ea6344789 ovn-installed in OVS
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.511 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:28 compute-0 NetworkManager[57258]: <info>  [1771427848.5173] device (tap7514447c-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 18 15:17:28 compute-0 NetworkManager[57258]: <info>  [1771427848.5193] device (tap7514447c-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 18 15:17:28 compute-0 ovn_controller[99062]: 2026-02-18T15:17:28Z|00108|binding|INFO|Setting lport 7514447c-f6a7-4670-817a-906ea6344789 up in Southbound
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.518 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.520 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:24:0f 10.100.0.8'], port_security=['fa:16:3e:34:24:0f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0914ee8e-421d-4e49-958e-4e659b7fdc22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7799c1f7-b42b-4f46-a4f0-f189be986a35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f278181458244cb4836c191782a17069', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aa8824c1-6371-4fe5-a8cf-d0fe69d37717', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8327e11-8865-4094-ab67-65cb3aafaf73, chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], logical_port=7514447c-f6a7-4670-817a-906ea6344789) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.521 108400 INFO neutron.agent.ovn.metadata.agent [-] Port 7514447c-f6a7-4670-817a-906ea6344789 in datapath 7799c1f7-b42b-4f46-a4f0-f189be986a35 bound to our chassis#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.524 108400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7799c1f7-b42b-4f46-a4f0-f189be986a35#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.532 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[66ffb2f1-7cff-4e53-8af8-814a66d0262c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.533 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7799c1f7-b1 in ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.535 242262 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7799c1f7-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.535 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[68b2d659-6123-4471-8513-ff3ad51009d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.536 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[a5bdf758-e754-4c4c-8f1b-8210996d9070]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.547 108948 DEBUG oslo.privsep.daemon [-] privsep: reply[ff73fd86-ff45-4b19-9c04-03020fa5af21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 systemd-machined[158361]: New machine qemu-10-instance-00000007.
Feb 18 15:17:28 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-00000007.
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.572 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[351f1d1a-043e-42dd-8666-cb46b1abb8bd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.617 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[0639d7a8-3e36-4b58-a9dc-2851587448ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.624 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5f8cfe-97d4-4f3e-b894-8275c544c539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 NetworkManager[57258]: <info>  [1771427848.6267] manager: (tap7799c1f7-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.657 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[58f9ef59-3f75-442e-b250-a5901e07cbcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.662 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[371d18aa-663b-45de-b068-05faba171794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 NetworkManager[57258]: <info>  [1771427848.6836] device (tap7799c1f7-b0): carrier: link connected
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.691 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[362dd9c7-3781-41a1-b39d-a836a0872e7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.707 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[8091ddf9-96d9-44ef-94d4-def13fdbaccd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7799c1f7-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c2:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494613, 'reachable_time': 20779, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253717, 'error': None, 'target': 'ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.729 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[b01b1ad3-5667-4a11-b4eb-917ed8e6b9df]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:c2a2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494613, 'tstamp': 494613}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253719, 'error': None, 'target': 'ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.749 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[e57200cd-4191-43ba-a2a1-b340b95ed428]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7799c1f7-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:c2:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494613, 'reachable_time': 20779, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253720, 'error': None, 'target': 'ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:28.767 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1995 Content-Type: application/json Date: Wed, 18 Feb 2026 15:17:26 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-b3253952-089e-4106-8f6f-3888fd2e8c6b x-openstack-request-id: req-b3253952-089e-4106-8f6f-3888fd2e8c6b _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Feb 18 15:17:28 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:28.768 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "538e968b-7f01-4e6b-af67-182df12fedec", "name": "tempest-AttachInterfacesUnderV243Test-server-1891672357", "status": "ACTIVE", "tenant_id": "e70a93fe3e61494488f1032883dfa661", "user_id": "5092e33fb89a453bb8e6853648498f94", "metadata": {}, "hostId": "a89954fd0778a37bb099359824c97bf8be3d0ee26a2da404ba757d4b", "image": {"id": "3b4a4a6a-1650-453f-ba10-3bb16d71641c", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/3b4a4a6a-1650-453f-ba10-3bb16d71641c"}]}, "flavor": {"id": "1682e27b-a40b-4634-9ba2-5b28d38a8558", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/1682e27b-a40b-4634-9ba2-5b28d38a8558"}]}, "created": "2026-02-18T15:15:56Z", "updated": "2026-02-18T15:16:09Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-836147848-network": [{"version": 4, "addr": "10.100.0.5", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:94:08:4b"}, {"version": 4, "addr": "192.168.122.231", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:94:08:4b"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/538e968b-7f01-4e6b-af67-182df12fedec"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/538e968b-7f01-4e6b-af67-182df12fedec"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-475234563", "OS-SRV-USG:launched_at": "2026-02-18T15:16:09.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1681541876"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000009", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", 
"OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Feb 18 15:17:28 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:28.768 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/538e968b-7f01-4e6b-af67-182df12fedec used request id req-b3253952-089e-4106-8f6f-3888fd2e8c6b request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Feb 18 15:17:28 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:28.770 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '538e968b-7f01-4e6b-af67-182df12fedec', 'name': 'tempest-AttachInterfacesUnderV243Test-server-1891672357', 'flavor': {'id': '1682e27b-a40b-4634-9ba2-5b28d38a8558', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '3b4a4a6a-1650-453f-ba10-3bb16d71641c'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e70a93fe3e61494488f1032883dfa661', 'user_id': '5092e33fb89a453bb8e6853648498f94', 'hostId': 'a89954fd0778a37bb099359824c97bf8be3d0ee26a2da404ba757d4b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:17:28 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:28.771 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 18 15:17:28 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:28.771 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:28 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:28.772 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:28 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:28.772 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:28 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:28.775 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-18T15:17:28.772371) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.784 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[7b803af4-3e9d-4da6-a618-6d16ffeea3b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.853 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[d6363af3-ff46-440c-bd02-0c16b7ca8106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.855 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7799c1f7-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.855 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.856 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7799c1f7-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.859 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:28 compute-0 kernel: tap7799c1f7-b0: entered promiscuous mode
Feb 18 15:17:28 compute-0 NetworkManager[57258]: <info>  [1771427848.8601] manager: (tap7799c1f7-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.863 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7799c1f7-b0, col_values=(('external_ids', {'iface-id': 'b78157d5-9744-4676-ba4f-c8dc9c568cc2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:17:28 compute-0 ovn_controller[99062]: 2026-02-18T15:17:28Z|00109|binding|INFO|Releasing lport b78157d5-9744-4676-ba4f-c8dc9c568cc2 from this chassis (sb_readonly=0)
Feb 18 15:17:28 compute-0 nova_compute[189016]: 2026-02-18 15:17:28.871 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.872 108400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7799c1f7-b42b-4f46-a4f0-f189be986a35.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7799c1f7-b42b-4f46-a4f0-f189be986a35.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.875 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[b12372d9-3157-4398-a8e1-f330efba902b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.876 108400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: global
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    log         /dev/log local0 debug
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    log-tag     haproxy-metadata-proxy-7799c1f7-b42b-4f46-a4f0-f189be986a35
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    user        root
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    group       root
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    maxconn     1024
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    pidfile     /var/lib/neutron/external/pids/7799c1f7-b42b-4f46-a4f0-f189be986a35.pid.haproxy
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    daemon
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: defaults
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    log global
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    mode http
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    option httplog
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    option dontlognull
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    option http-server-close
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    option forwardfor
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    retries                 3
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    timeout http-request    30s
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    timeout connect         30s
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    timeout client          32s
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    timeout server          32s
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    timeout http-keep-alive 30s
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: listen listener
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    bind 169.254.169.254:80
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    server metadata /var/lib/neutron/metadata_proxy
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]:    http-request add-header X-OVN-Network-ID 7799c1f7-b42b-4f46-a4f0-f189be986a35
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 18 15:17:28 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:28.877 108400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35', 'env', 'PROCESS_TAG=haproxy-7799c1f7-b42b-4f46-a4f0-f189be986a35', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7799c1f7-b42b-4f46-a4f0-f189be986a35.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 18 15:17:29 compute-0 podman[253750]: 2026-02-18 15:17:29.333022659 +0000 UTC m=+0.092507135 container create f0040b8f7b734d88ddb651953432c73db82227dced1f1857ad32cd5efa9059fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 18 15:17:29 compute-0 podman[253750]: 2026-02-18 15:17:29.278613588 +0000 UTC m=+0.038098094 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 18 15:17:29 compute-0 systemd[1]: Started libpod-conmon-f0040b8f7b734d88ddb651953432c73db82227dced1f1857ad32cd5efa9059fb.scope.
Feb 18 15:17:29 compute-0 nova_compute[189016]: 2026-02-18 15:17:29.390 189020 DEBUG nova.virt.libvirt.host [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Removed pending event for 0914ee8e-421d-4e49-958e-4e659b7fdc22 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 18 15:17:29 compute-0 nova_compute[189016]: 2026-02-18 15:17:29.392 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427849.3895602, 0914ee8e-421d-4e49-958e-4e659b7fdc22 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:17:29 compute-0 nova_compute[189016]: 2026-02-18 15:17:29.392 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] VM Resumed (Lifecycle Event)#033[00m
Feb 18 15:17:29 compute-0 nova_compute[189016]: 2026-02-18 15:17:29.397 189020 DEBUG nova.compute.manager [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 18 15:17:29 compute-0 nova_compute[189016]: 2026-02-18 15:17:29.405 189020 INFO nova.virt.libvirt.driver [-] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Instance rebooted successfully.#033[00m
Feb 18 15:17:29 compute-0 nova_compute[189016]: 2026-02-18 15:17:29.406 189020 DEBUG nova.compute.manager [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:17:29 compute-0 systemd[1]: Started libcrun container.
Feb 18 15:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b8d99a5c9f1e191768c16380d25c4e3ae949990603b9b1f95f9fc68cb5e737/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 18 15:17:29 compute-0 podman[253750]: 2026-02-18 15:17:29.437889143 +0000 UTC m=+0.197373649 container init f0040b8f7b734d88ddb651953432c73db82227dced1f1857ad32cd5efa9059fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 18 15:17:29 compute-0 podman[253750]: 2026-02-18 15:17:29.444871087 +0000 UTC m=+0.204355563 container start f0040b8f7b734d88ddb651953432c73db82227dced1f1857ad32cd5efa9059fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Feb 18 15:17:29 compute-0 nova_compute[189016]: 2026-02-18 15:17:29.461 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:17:29 compute-0 nova_compute[189016]: 2026-02-18 15:17:29.467 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.473 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.475 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35[253773]: [NOTICE]   (253777) : New worker (253779) forked
Feb 18 15:17:29 compute-0 neutron-haproxy-ovnmeta-7799c1f7-b42b-4f46-a4f0-f189be986a35[253773]: [NOTICE]   (253777) : Loading success.
Feb 18 15:17:29 compute-0 nova_compute[189016]: 2026-02-18 15:17:29.492 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Feb 18 15:17:29 compute-0 nova_compute[189016]: 2026-02-18 15:17:29.493 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427849.39571, 0914ee8e-421d-4e49-958e-4e659b7fdc22 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:17:29 compute-0 nova_compute[189016]: 2026-02-18 15:17:29.493 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] VM Started (Lifecycle Event)#033[00m
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.520 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.read.latency volume: 1081895999 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.521 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.read.latency volume: 84237973 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.522 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.522 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f78deee0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.522 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.522 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.522 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.522 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.522 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.522 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.523 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.read.bytes volume: 30177792 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.523 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.523 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.524 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f78deee0830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-18T15:17:29.522503) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.524 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.525 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.525 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.525 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.526 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-18T15:17:29.525872) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.526 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.526 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.527 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.read.requests volume: 1088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.527 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.528 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.528 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f78deee2ae0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.529 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.529 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.529 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.530 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-18T15:17:29.530167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.545 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.546 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 nova_compute[189016]: 2026-02-18 15:17:29.553 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:17:29 compute-0 nova_compute[189016]: 2026-02-18 15:17:29.558 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.561 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.562 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 nova_compute[189016]: 2026-02-18 15:17:29.562 189020 DEBUG oslo_concurrency.lockutils [None req-2f2a4dc8-9057-44ef-bb37-4a2abe58c8ee d7476a1b8c814ab687793dcb836094b1 f278181458244cb4836c191782a17069 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.563 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.564 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f78deee0890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.564 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.565 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.565 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.566 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.566 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.566 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-18T15:17:29.566144) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.567 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.568 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.568 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.569 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.570 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f78deee08f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.570 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.570 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.571 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.571 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-18T15:17:29.571628) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.572 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.572 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.573 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.write.bytes volume: 72978432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.573 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.574 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.574 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f78deee2390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.575 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.575 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.576 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.576 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-18T15:17:29.576539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.581 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 0914ee8e-421d-4e49-958e-4e659b7fdc22 / tap7514447c-f6 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.582 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.585 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 538e968b-7f01-4e6b-af67-182df12fedec / tap00cd0913-0a inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.596 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.600 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.600 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f78e1127770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.600 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.600 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.601 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.601 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.601 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-18T15:17:29.601075) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.626 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/cpu volume: 190000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.665 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/cpu volume: 36960000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.666 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.666 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f78deee0950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.667 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.667 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.667 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.668 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.668 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.668 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-18T15:17:29.668252) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.669 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.670 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.write.latency volume: 4322577684 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.670 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.671 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.671 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f78deee21b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.672 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.672 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.673 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.673 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.674 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.674 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-18T15:17:29.673525) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.674 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.675 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.675 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f78deee0d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.675 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.676 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.676 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.676 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.677 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-18T15:17:29.676847) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.677 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.678 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.680 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.681 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f78deee09b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.682 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.683 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.683 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.683 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.684 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.684 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-18T15:17:29.683853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.685 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.686 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.write.requests volume: 319 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.686 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.687 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.687 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f78deee1010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.688 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.688 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.688 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.689 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.689 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-18T15:17:29.689341) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.690 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.691 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.691 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f78deee0a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.691 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.692 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.692 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.693 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.693 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-18T15:17:29.693187) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.694 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.694 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f78deee1280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.695 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.695 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.695 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.696 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.696 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.696 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-18T15:17:29.696317) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.697 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.698 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.698 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f78deee2240>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.699 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.699 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.699 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.700 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.700 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.701 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.701 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.702 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f78deee0a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.700 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-18T15:17:29.700272) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.702 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.703 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.703 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.704 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.704 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-18T15:17:29.704138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.705 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.705 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f78deee22d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.706 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.706 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.706 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.707 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.707 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.707 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-18T15:17:29.707249) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.708 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.709 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.709 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f78deee2330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.710 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.710 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.711 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.711 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.711 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-02-18T15:17:29.711461) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.711 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.712 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-81098757>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1891672357>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-81098757>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1891672357>]
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.713 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f78deee23c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.713 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.714 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.714 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.715 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.715 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-18T15:17:29.715023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.715 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.716 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.716 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.717 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f78deee0cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.717 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.718 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.718 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.719 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-18T15:17:29.719022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.719 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.720 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 0914ee8e-421d-4e49-958e-4e659b7fdc22: ceilometer.compute.pollsters.NoVolumeException
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.720 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/memory.usage volume: 42.609375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.721 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.721 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f78deee2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.722 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.722 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.722 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.723 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.723 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-18T15:17:29.723217) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.723 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.724 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.725 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.725 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.726 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.726 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f78deee0d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.727 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.727 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.727 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.728 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-18T15:17:29.728217) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.728 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.728 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.incoming.bytes volume: 4343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.729 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.729 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f78deee1520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.729 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.730 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.730 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.730 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.731 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.731 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-18T15:17:29.730698) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.731 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.731 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.732 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f78deee20f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.732 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.732 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.732 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.733 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.733 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-18T15:17:29.733112) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.733 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.734 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.734 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f78deee2420>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.735 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.735 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.735 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.736 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.736 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-02-18T15:17:29.736115) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.736 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-81098757>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1891672357>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-81098757>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-1891672357>]
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.737 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.737 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.738 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.738 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.738 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.739 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.739 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.739 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.740 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.740 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.740 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.741 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.741 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.741 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.742 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.742 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.742 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.743 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.743 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.745 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.746 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.746 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.746 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.747 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 podman[204930]: time="2026-02-18T15:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.748 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:17:29.748 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:17:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30472 "" "Go-http-client/1.1"
Feb 18 15:17:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4842 "" "Go-http-client/1.1"
Feb 18 15:17:31 compute-0 openstack_network_exporter[208107]: ERROR   15:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:17:31 compute-0 openstack_network_exporter[208107]: ERROR   15:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:17:32 compute-0 nova_compute[189016]: 2026-02-18 15:17:32.692 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:33 compute-0 nova_compute[189016]: 2026-02-18 15:17:33.379 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:33 compute-0 podman[253791]: 2026-02-18 15:17:33.756489926 +0000 UTC m=+0.078999668 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 18 15:17:33 compute-0 podman[253792]: 2026-02-18 15:17:33.787626305 +0000 UTC m=+0.108962737 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.7, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, name=ubi9/ubi-minimal)
Feb 18 15:17:33 compute-0 nova_compute[189016]: 2026-02-18 15:17:33.871 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:33 compute-0 nova_compute[189016]: 2026-02-18 15:17:33.911 189020 DEBUG nova.network.neutron [None req-dc30c91d-927c-47b2-a1d5-74460a10e349 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Updating instance_info_cache with network_info: [{"id": "00cd0913-0a46-457d-9b30-4007ec209a54", "address": "fa:16:3e:94:08:4b", "network": {"id": "aa0dcae9-fb1d-4854-8c2f-bb40797fee0c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-836147848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e70a93fe3e61494488f1032883dfa661", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00cd0913-0a", "ovs_interfaceid": "00cd0913-0a46-457d-9b30-4007ec209a54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:17:34 compute-0 nova_compute[189016]: 2026-02-18 15:17:34.824 189020 DEBUG oslo_concurrency.lockutils [None req-dc30c91d-927c-47b2-a1d5-74460a10e349 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Releasing lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:17:34 compute-0 nova_compute[189016]: 2026-02-18 15:17:34.825 189020 DEBUG nova.compute.manager [None req-dc30c91d-927c-47b2-a1d5-74460a10e349 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Feb 18 15:17:34 compute-0 nova_compute[189016]: 2026-02-18 15:17:34.827 189020 DEBUG nova.compute.manager [None req-dc30c91d-927c-47b2-a1d5-74460a10e349 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] network_info to inject: |[{"id": "00cd0913-0a46-457d-9b30-4007ec209a54", "address": "fa:16:3e:94:08:4b", "network": {"id": "aa0dcae9-fb1d-4854-8c2f-bb40797fee0c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-836147848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e70a93fe3e61494488f1032883dfa661", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00cd0913-0a", "ovs_interfaceid": "00cd0913-0a46-457d-9b30-4007ec209a54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Feb 18 15:17:34 compute-0 nova_compute[189016]: 2026-02-18 15:17:34.830 189020 DEBUG oslo_concurrency.lockutils [req-ba47a847-830e-4506-864f-ce98e72a8486 req-c6d5fe72-62e4-4c8f-a91a-66345a55ad2a af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:17:34 compute-0 nova_compute[189016]: 2026-02-18 15:17:34.831 189020 DEBUG nova.network.neutron [req-ba47a847-830e-4506-864f-ce98e72a8486 req-c6d5fe72-62e4-4c8f-a91a-66345a55ad2a af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Refreshing network info cache for port 00cd0913-0a46-457d-9b30-4007ec209a54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 18 15:17:37 compute-0 nova_compute[189016]: 2026-02-18 15:17:37.138 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:37 compute-0 nova_compute[189016]: 2026-02-18 15:17:37.695 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:38 compute-0 nova_compute[189016]: 2026-02-18 15:17:38.382 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:39 compute-0 podman[253842]: 2026-02-18 15:17:39.768113158 +0000 UTC m=+0.081238383 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 18 15:17:39 compute-0 podman[253841]: 2026-02-18 15:17:39.786674572 +0000 UTC m=+0.108263719 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 18 15:17:39 compute-0 podman[253843]: 2026-02-18 15:17:39.790757685 +0000 UTC m=+0.105264705 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, name=ubi9, distribution-scope=public, io.buildah.version=1.29.0, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized 
applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vendor=Red Hat, Inc., version=9.4, config_id=kepler, container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Feb 18 15:17:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:41.462 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:17:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:41.463 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:17:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:41.464 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:17:41 compute-0 nova_compute[189016]: 2026-02-18 15:17:41.524 189020 DEBUG nova.compute.manager [req-02ea00e9-78af-4e7e-aae1-1678ad873907 req-0849a75a-5393-4b56-a77f-634d883e0648 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Received event network-vif-unplugged-7514447c-f6a7-4670-817a-906ea6344789 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:17:41 compute-0 nova_compute[189016]: 2026-02-18 15:17:41.526 189020 DEBUG oslo_concurrency.lockutils [req-02ea00e9-78af-4e7e-aae1-1678ad873907 req-0849a75a-5393-4b56-a77f-634d883e0648 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:17:41 compute-0 nova_compute[189016]: 2026-02-18 15:17:41.527 189020 DEBUG oslo_concurrency.lockutils [req-02ea00e9-78af-4e7e-aae1-1678ad873907 req-0849a75a-5393-4b56-a77f-634d883e0648 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:17:41 compute-0 nova_compute[189016]: 2026-02-18 15:17:41.527 189020 DEBUG oslo_concurrency.lockutils [req-02ea00e9-78af-4e7e-aae1-1678ad873907 req-0849a75a-5393-4b56-a77f-634d883e0648 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:17:41 compute-0 nova_compute[189016]: 2026-02-18 15:17:41.527 189020 DEBUG nova.compute.manager [req-02ea00e9-78af-4e7e-aae1-1678ad873907 req-0849a75a-5393-4b56-a77f-634d883e0648 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] No waiting events found dispatching network-vif-unplugged-7514447c-f6a7-4670-817a-906ea6344789 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:17:41 compute-0 nova_compute[189016]: 2026-02-18 15:17:41.528 189020 WARNING nova.compute.manager [req-02ea00e9-78af-4e7e-aae1-1678ad873907 req-0849a75a-5393-4b56-a77f-634d883e0648 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Received unexpected event network-vif-unplugged-7514447c-f6a7-4670-817a-906ea6344789 for instance with vm_state active and task_state None.#033[00m
Feb 18 15:17:42 compute-0 nova_compute[189016]: 2026-02-18 15:17:42.703 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:43 compute-0 nova_compute[189016]: 2026-02-18 15:17:43.385 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:43 compute-0 nova_compute[189016]: 2026-02-18 15:17:43.624 189020 DEBUG nova.network.neutron [req-ba47a847-830e-4506-864f-ce98e72a8486 req-c6d5fe72-62e4-4c8f-a91a-66345a55ad2a af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Updated VIF entry in instance network info cache for port 00cd0913-0a46-457d-9b30-4007ec209a54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 18 15:17:43 compute-0 nova_compute[189016]: 2026-02-18 15:17:43.625 189020 DEBUG nova.network.neutron [req-ba47a847-830e-4506-864f-ce98e72a8486 req-c6d5fe72-62e4-4c8f-a91a-66345a55ad2a af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Updating instance_info_cache with network_info: [{"id": "00cd0913-0a46-457d-9b30-4007ec209a54", "address": "fa:16:3e:94:08:4b", "network": {"id": "aa0dcae9-fb1d-4854-8c2f-bb40797fee0c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-836147848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e70a93fe3e61494488f1032883dfa661", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00cd0913-0a", "ovs_interfaceid": "00cd0913-0a46-457d-9b30-4007ec209a54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:17:44 compute-0 nova_compute[189016]: 2026-02-18 15:17:44.129 189020 DEBUG oslo_concurrency.lockutils [req-ba47a847-830e-4506-864f-ce98e72a8486 req-c6d5fe72-62e4-4c8f-a91a-66345a55ad2a af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:17:44 compute-0 nova_compute[189016]: 2026-02-18 15:17:44.922 189020 DEBUG nova.compute.manager [req-cd4b48f0-5ab3-49f9-9eea-f5d6e994a71b req-2422ccf4-d224-44c9-b082-aea0b036751f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Received event network-vif-plugged-7514447c-f6a7-4670-817a-906ea6344789 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:17:44 compute-0 nova_compute[189016]: 2026-02-18 15:17:44.923 189020 DEBUG oslo_concurrency.lockutils [req-cd4b48f0-5ab3-49f9-9eea-f5d6e994a71b req-2422ccf4-d224-44c9-b082-aea0b036751f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:17:44 compute-0 nova_compute[189016]: 2026-02-18 15:17:44.924 189020 DEBUG oslo_concurrency.lockutils [req-cd4b48f0-5ab3-49f9-9eea-f5d6e994a71b req-2422ccf4-d224-44c9-b082-aea0b036751f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:17:44 compute-0 nova_compute[189016]: 2026-02-18 15:17:44.924 189020 DEBUG oslo_concurrency.lockutils [req-cd4b48f0-5ab3-49f9-9eea-f5d6e994a71b req-2422ccf4-d224-44c9-b082-aea0b036751f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:17:44 compute-0 nova_compute[189016]: 2026-02-18 15:17:44.925 189020 DEBUG nova.compute.manager [req-cd4b48f0-5ab3-49f9-9eea-f5d6e994a71b req-2422ccf4-d224-44c9-b082-aea0b036751f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] No waiting events found dispatching network-vif-plugged-7514447c-f6a7-4670-817a-906ea6344789 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:17:44 compute-0 nova_compute[189016]: 2026-02-18 15:17:44.925 189020 WARNING nova.compute.manager [req-cd4b48f0-5ab3-49f9-9eea-f5d6e994a71b req-2422ccf4-d224-44c9-b082-aea0b036751f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Received unexpected event network-vif-plugged-7514447c-f6a7-4670-817a-906ea6344789 for instance with vm_state active and task_state None.#033[00m
Feb 18 15:17:44 compute-0 nova_compute[189016]: 2026-02-18 15:17:44.926 189020 DEBUG nova.compute.manager [req-cd4b48f0-5ab3-49f9-9eea-f5d6e994a71b req-2422ccf4-d224-44c9-b082-aea0b036751f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Received event network-vif-plugged-7514447c-f6a7-4670-817a-906ea6344789 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:17:44 compute-0 nova_compute[189016]: 2026-02-18 15:17:44.926 189020 DEBUG oslo_concurrency.lockutils [req-cd4b48f0-5ab3-49f9-9eea-f5d6e994a71b req-2422ccf4-d224-44c9-b082-aea0b036751f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:17:44 compute-0 nova_compute[189016]: 2026-02-18 15:17:44.927 189020 DEBUG oslo_concurrency.lockutils [req-cd4b48f0-5ab3-49f9-9eea-f5d6e994a71b req-2422ccf4-d224-44c9-b082-aea0b036751f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:17:44 compute-0 nova_compute[189016]: 2026-02-18 15:17:44.927 189020 DEBUG oslo_concurrency.lockutils [req-cd4b48f0-5ab3-49f9-9eea-f5d6e994a71b req-2422ccf4-d224-44c9-b082-aea0b036751f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:17:44 compute-0 nova_compute[189016]: 2026-02-18 15:17:44.928 189020 DEBUG nova.compute.manager [req-cd4b48f0-5ab3-49f9-9eea-f5d6e994a71b req-2422ccf4-d224-44c9-b082-aea0b036751f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] No waiting events found dispatching network-vif-plugged-7514447c-f6a7-4670-817a-906ea6344789 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:17:44 compute-0 nova_compute[189016]: 2026-02-18 15:17:44.928 189020 WARNING nova.compute.manager [req-cd4b48f0-5ab3-49f9-9eea-f5d6e994a71b req-2422ccf4-d224-44c9-b082-aea0b036751f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Received unexpected event network-vif-plugged-7514447c-f6a7-4670-817a-906ea6344789 for instance with vm_state active and task_state None.#033[00m
Feb 18 15:17:45 compute-0 ovn_controller[99062]: 2026-02-18T15:17:45Z|00110|binding|INFO|Releasing lport b78157d5-9744-4676-ba4f-c8dc9c568cc2 from this chassis (sb_readonly=0)
Feb 18 15:17:45 compute-0 ovn_controller[99062]: 2026-02-18T15:17:45Z|00111|binding|INFO|Releasing lport 61d27581-bc51-4cfa-981d-b25d24632870 from this chassis (sb_readonly=0)
Feb 18 15:17:45 compute-0 nova_compute[189016]: 2026-02-18 15:17:45.271 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:46 compute-0 nova_compute[189016]: 2026-02-18 15:17:46.642 189020 DEBUG nova.objects.instance [None req-954ebd7c-75fb-4f30-ba85-c2b86d0fcb3a 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Lazy-loading 'flavor' on Instance uuid 538e968b-7f01-4e6b-af67-182df12fedec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:17:46 compute-0 nova_compute[189016]: 2026-02-18 15:17:46.924 189020 DEBUG oslo_concurrency.lockutils [None req-954ebd7c-75fb-4f30-ba85-c2b86d0fcb3a 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Acquiring lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:17:46 compute-0 nova_compute[189016]: 2026-02-18 15:17:46.925 189020 DEBUG oslo_concurrency.lockutils [None req-954ebd7c-75fb-4f30-ba85-c2b86d0fcb3a 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Acquired lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:17:47 compute-0 nova_compute[189016]: 2026-02-18 15:17:47.710 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:47 compute-0 podman[253895]: 2026-02-18 15:17:47.794120704 +0000 UTC m=+0.087389447 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 15:17:47 compute-0 nova_compute[189016]: 2026-02-18 15:17:47.828 189020 DEBUG nova.compute.manager [req-9f8c4143-c35f-4774-9fea-a39d34329fd9 req-d5cada8f-4e78-4bec-a7e8-ca5f30a5ef25 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Received event network-vif-plugged-7514447c-f6a7-4670-817a-906ea6344789 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:17:47 compute-0 nova_compute[189016]: 2026-02-18 15:17:47.828 189020 DEBUG oslo_concurrency.lockutils [req-9f8c4143-c35f-4774-9fea-a39d34329fd9 req-d5cada8f-4e78-4bec-a7e8-ca5f30a5ef25 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:17:47 compute-0 nova_compute[189016]: 2026-02-18 15:17:47.829 189020 DEBUG oslo_concurrency.lockutils [req-9f8c4143-c35f-4774-9fea-a39d34329fd9 req-d5cada8f-4e78-4bec-a7e8-ca5f30a5ef25 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:17:47 compute-0 nova_compute[189016]: 2026-02-18 15:17:47.829 189020 DEBUG oslo_concurrency.lockutils [req-9f8c4143-c35f-4774-9fea-a39d34329fd9 req-d5cada8f-4e78-4bec-a7e8-ca5f30a5ef25 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "0914ee8e-421d-4e49-958e-4e659b7fdc22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:17:47 compute-0 nova_compute[189016]: 2026-02-18 15:17:47.829 189020 DEBUG nova.compute.manager [req-9f8c4143-c35f-4774-9fea-a39d34329fd9 req-d5cada8f-4e78-4bec-a7e8-ca5f30a5ef25 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] No waiting events found dispatching network-vif-plugged-7514447c-f6a7-4670-817a-906ea6344789 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 18 15:17:47 compute-0 nova_compute[189016]: 2026-02-18 15:17:47.829 189020 WARNING nova.compute.manager [req-9f8c4143-c35f-4774-9fea-a39d34329fd9 req-d5cada8f-4e78-4bec-a7e8-ca5f30a5ef25 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Received unexpected event network-vif-plugged-7514447c-f6a7-4670-817a-906ea6344789 for instance with vm_state active and task_state None.#033[00m
Feb 18 15:17:47 compute-0 podman[253894]: 2026-02-18 15:17:47.832384741 +0000 UTC m=+0.141737196 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260216, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Feb 18 15:17:48 compute-0 nova_compute[189016]: 2026-02-18 15:17:48.389 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:48 compute-0 nova_compute[189016]: 2026-02-18 15:17:48.581 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Acquiring lock "2177a803-311a-47ef-8beb-465c67ce1bdc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:17:48 compute-0 nova_compute[189016]: 2026-02-18 15:17:48.582 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Lock "2177a803-311a-47ef-8beb-465c67ce1bdc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:17:48 compute-0 nova_compute[189016]: 2026-02-18 15:17:48.639 189020 DEBUG nova.compute.manager [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 18 15:17:48 compute-0 nova_compute[189016]: 2026-02-18 15:17:48.865 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:17:48 compute-0 nova_compute[189016]: 2026-02-18 15:17:48.866 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:17:49 compute-0 nova_compute[189016]: 2026-02-18 15:17:49.013 189020 DEBUG nova.virt.hardware [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 18 15:17:49 compute-0 nova_compute[189016]: 2026-02-18 15:17:49.014 189020 INFO nova.compute.claims [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 18 15:17:49 compute-0 nova_compute[189016]: 2026-02-18 15:17:49.386 189020 DEBUG nova.compute.provider_tree [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:17:49 compute-0 nova_compute[189016]: 2026-02-18 15:17:49.412 189020 DEBUG nova.scheduler.client.report [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:17:49 compute-0 nova_compute[189016]: 2026-02-18 15:17:49.459 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:17:49 compute-0 nova_compute[189016]: 2026-02-18 15:17:49.461 189020 DEBUG nova.compute.manager [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 18 15:17:49 compute-0 nova_compute[189016]: 2026-02-18 15:17:49.546 189020 DEBUG nova.compute.manager [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 18 15:17:49 compute-0 nova_compute[189016]: 2026-02-18 15:17:49.547 189020 DEBUG nova.network.neutron [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 18 15:17:49 compute-0 nova_compute[189016]: 2026-02-18 15:17:49.569 189020 INFO nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 18 15:17:49 compute-0 nova_compute[189016]: 2026-02-18 15:17:49.589 189020 DEBUG nova.compute.manager [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 18 15:17:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:49.830 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:bc:a6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:81:9f:51:00:b1'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:17:49 compute-0 nova_compute[189016]: 2026-02-18 15:17:49.833 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:49 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:49.835 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.075 189020 DEBUG nova.compute.manager [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.077 189020 DEBUG nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.078 189020 INFO nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Creating image(s)#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.079 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Acquiring lock "/var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.079 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Lock "/var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.080 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Lock "/var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.101 189020 DEBUG oslo_concurrency.processutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.159 189020 DEBUG oslo_concurrency.processutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.161 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Acquiring lock "7b8f481705a33c6196332050fe7f03324002d985" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.163 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Lock "7b8f481705a33c6196332050fe7f03324002d985" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.177 189020 DEBUG oslo_concurrency.processutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.235 189020 DEBUG oslo_concurrency.processutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.237 189020 DEBUG oslo_concurrency.processutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985,backing_fmt=raw /var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.281 189020 DEBUG oslo_concurrency.processutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985,backing_fmt=raw /var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.283 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Lock "7b8f481705a33c6196332050fe7f03324002d985" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.283 189020 DEBUG oslo_concurrency.processutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.346 189020 DEBUG oslo_concurrency.processutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7b8f481705a33c6196332050fe7f03324002d985 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.348 189020 DEBUG nova.virt.disk.api [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Checking if we can resize image /var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.348 189020 DEBUG oslo_concurrency.processutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.418 189020 DEBUG oslo_concurrency.processutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.420 189020 DEBUG nova.virt.disk.api [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Cannot resize image /var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.421 189020 DEBUG nova.objects.instance [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Lazy-loading 'migration_context' on Instance uuid 2177a803-311a-47ef-8beb-465c67ce1bdc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.438 189020 DEBUG nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.440 189020 DEBUG nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Ensure instance console log exists: /var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.441 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.441 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.442 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.648 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:50 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:17:50.839 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bff0df27-aa33-4d98-b417-cc9248f7a486, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:17:50 compute-0 nova_compute[189016]: 2026-02-18 15:17:50.904 189020 DEBUG nova.policy [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '959521a7f37a426ba0b225e559599f65', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '72f850c210f143e1a4fcb26746366722', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 18 15:17:51 compute-0 nova_compute[189016]: 2026-02-18 15:17:51.232 189020 DEBUG nova.network.neutron [None req-954ebd7c-75fb-4f30-ba85-c2b86d0fcb3a 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 18 15:17:51 compute-0 nova_compute[189016]: 2026-02-18 15:17:51.656 189020 DEBUG nova.compute.manager [req-ddb050c3-1100-4e5e-92d5-569f2d7916b6 req-de482c12-ac86-4915-bba9-38013f65020f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Received event network-changed-00cd0913-0a46-457d-9b30-4007ec209a54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:17:51 compute-0 nova_compute[189016]: 2026-02-18 15:17:51.657 189020 DEBUG nova.compute.manager [req-ddb050c3-1100-4e5e-92d5-569f2d7916b6 req-de482c12-ac86-4915-bba9-38013f65020f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Refreshing instance network info cache due to event network-changed-00cd0913-0a46-457d-9b30-4007ec209a54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 18 15:17:51 compute-0 nova_compute[189016]: 2026-02-18 15:17:51.658 189020 DEBUG oslo_concurrency.lockutils [req-ddb050c3-1100-4e5e-92d5-569f2d7916b6 req-de482c12-ac86-4915-bba9-38013f65020f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:17:51 compute-0 podman[253955]: 2026-02-18 15:17:51.799758218 +0000 UTC m=+0.120694010 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 18 15:17:52 compute-0 nova_compute[189016]: 2026-02-18 15:17:52.715 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:53 compute-0 nova_compute[189016]: 2026-02-18 15:17:53.393 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:53 compute-0 nova_compute[189016]: 2026-02-18 15:17:53.402 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:57 compute-0 nova_compute[189016]: 2026-02-18 15:17:57.045 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:17:57 compute-0 nova_compute[189016]: 2026-02-18 15:17:57.723 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:58 compute-0 nova_compute[189016]: 2026-02-18 15:17:58.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:17:58 compute-0 nova_compute[189016]: 2026-02-18 15:17:58.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:17:58 compute-0 nova_compute[189016]: 2026-02-18 15:17:58.397 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:17:59 compute-0 podman[204930]: time="2026-02-18T15:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:17:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30472 "" "Go-http-client/1.1"
Feb 18 15:17:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4848 "" "Go-http-client/1.1"
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:17:59 compute-0 nova_compute[189016]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:17:59 compute-0 nova_compute[189016]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db     raise result
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n
Feb 18 15:17:59 compute-0 nova_compute[189016]: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db #033[00m
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.053 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.055 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:18:00 compute-0 rsyslogd[239561]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:18:00 compute-0 rsyslogd[239561]: message too long (9052) with configured size 8096, begin of message is: 2026-02-18 15:17:59.806 189020 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Error during ComputeManager._heal_instance_info_cache: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:18:00 compute-0 nova_compute[189016]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:18:00 compute-0 nova_compute[189016]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 525, in get_by_uuid\n    db_inst = cls._db_instance_get_by_uuid(context, uuid, columns_to_join,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, 
in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 517, in _db_instance_get_by_uuid\n    return db.instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1395, in instance_get_by_uuid\n    return _instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1400, in _instance_get_by_uuid\n    result = _build_instance_get(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in 
__connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task Traceback (most recent call last):
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task     task(self, context)
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 9891, in _heal_instance_info_cache
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task     inst = objects.Instance.get_by_uuid(
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task     result = self.transport._send(
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task     raise result
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 525, in get_by_uuid\n    db_inst = cls._db_instance_get_by_uuid(context, uuid, columns_to_join,\n', '  File 
"/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 517, in _db_instance_get_by_uuid\n    return db.instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1395, in instance_get_by_uuid\n    return _instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1400, in _instance_get_by_uuid\n    result = _build_instance_get(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  
File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 
680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578,
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task #033[00m
Feb 18 15:18:00 compute-0 nova_compute[189016]: 2026-02-18 15:18:00.094 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:18:00 compute-0 rsyslogd[239561]: message too long (8833) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:18:00 compute-0 rsyslogd[239561]: message too long (8897) with configured size 8096, begin of message is: 2026-02-18 15:18:00.084 189020 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:18:01 compute-0 nova_compute[189016]: 2026-02-18 15:18:01.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:18:01 compute-0 openstack_network_exporter[208107]: ERROR   15:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:18:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:18:01 compute-0 openstack_network_exporter[208107]: ERROR   15:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:18:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:18:02 compute-0 nova_compute[189016]: 2026-02-18 15:18:02.727 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:03 compute-0 nova_compute[189016]: 2026-02-18 15:18:03.668 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:04 compute-0 nova_compute[189016]: 2026-02-18 15:18:04.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:18:04 compute-0 podman[253995]: 2026-02-18 15:18:04.776697166 +0000 UTC m=+0.084336003 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 15:18:04 compute-0 podman[253996]: 2026-02-18 15:18:04.786668718 +0000 UTC m=+0.094636784 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, release=1770267347, io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 18 15:18:05 compute-0 nova_compute[189016]: 2026-02-18 15:18:05.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:18:05 compute-0 ovn_controller[99062]: 2026-02-18T15:18:05Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:34:24:0f 10.100.0.8
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.054 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Error during ComputeManager.update_available_resource: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:18:06 compute-0 nova_compute[189016]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:18:06 compute-0 nova_compute[189016]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 
179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n  
  fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to 
MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task Traceback (most recent call last):
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task     task(self, context)
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10584, in update_available_resource
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task     compute_nodes_in_db = self._get_compute_nodes_in_db(context,
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10631, in _get_compute_nodes_in_db
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task     return objects.ComputeNodeList.get_all_by_host(context, self.host,
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task     result = self.transport._send(
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task     raise result
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File 
"/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 
'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at:
Feb 18 15:18:06 compute-0 nova_compute[189016]: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task #033[00m
Feb 18 15:18:06 compute-0 rsyslogd[239561]: message too long (8132) with configured size 8096, begin of message is: 2026-02-18 15:18:06.115 189020 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:18:07 compute-0 nova_compute[189016]: 2026-02-18 15:18:07.729 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:08 compute-0 nova_compute[189016]: 2026-02-18 15:18:08.672 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:18:09 compute-0 nova_compute[189016]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:18:09 compute-0 nova_compute[189016]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = 
e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in 
raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise 
exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db     raise result
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n
Feb 18 15:18:09 compute-0 nova_compute[189016]: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db #033[00m
Feb 18 15:18:10 compute-0 rsyslogd[239561]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:18:10 compute-0 rsyslogd[239561]: message too long (9052) with configured size 8096, begin of message is: 2026-02-18 15:18:09.795 189020 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:18:10 compute-0 podman[254038]: 2026-02-18 15:18:10.799804298 +0000 UTC m=+0.127344751 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 15:18:10 compute-0 podman[254039]: 2026-02-18 15:18:10.812211482 +0000 UTC m=+0.134675746 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 15:18:10 compute-0 podman[254040]: 2026-02-18 15:18:10.819598279 +0000 UTC m=+0.084452587 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=kepler, release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-type=git, io.buildah.version=1.29.0, release-0.7.12=)
Feb 18 15:18:12 compute-0 nova_compute[189016]: 2026-02-18 15:18:12.732 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:13 compute-0 nova_compute[189016]: 2026-02-18 15:18:13.673 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:17 compute-0 nova_compute[189016]: 2026-02-18 15:18:17.114 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:18:17 compute-0 nova_compute[189016]: 2026-02-18 15:18:17.735 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:18 compute-0 nova_compute[189016]: 2026-02-18 15:18:18.676 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:18 compute-0 podman[254097]: 2026-02-18 15:18:18.764543863 +0000 UTC m=+0.087153954 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 18 15:18:18 compute-0 podman[254096]: 2026-02-18 15:18:18.791401892 +0000 UTC m=+0.111887200 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644)
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:18:19 compute-0 nova_compute[189016]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:18:19 compute-0 nova_compute[189016]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = 
e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in 
raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise 
exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db     raise result
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n
Feb 18 15:18:19 compute-0 nova_compute[189016]: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db #033[00m
Feb 18 15:18:20 compute-0 rsyslogd[239561]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:18:20 compute-0 rsyslogd[239561]: message too long (9052) with configured size 8096, begin of message is: 2026-02-18 15:18:19.796 189020 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:18:22 compute-0 nova_compute[189016]: 2026-02-18 15:18:22.740 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:22 compute-0 podman[254138]: 2026-02-18 15:18:22.831158181 +0000 UTC m=+0.143425677 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible)
Feb 18 15:18:23 compute-0 nova_compute[189016]: 2026-02-18 15:18:23.680 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:23 compute-0 ovn_controller[99062]: 2026-02-18T15:18:23Z|00112|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Feb 18 15:18:27 compute-0 nova_compute[189016]: 2026-02-18 15:18:27.742 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:28 compute-0 nova_compute[189016]: 2026-02-18 15:18:28.683 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:29 compute-0 podman[204930]: time="2026-02-18T15:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:18:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30472 "" "Go-http-client/1.1"
Feb 18 15:18:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4844 "" "Go-http-client/1.1"
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:18:29 compute-0 nova_compute[189016]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:18:29 compute-0 nova_compute[189016]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = 
e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in 
raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise 
exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db     raise result
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n
Feb 18 15:18:29 compute-0 nova_compute[189016]: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db #033[00m
Feb 18 15:18:30 compute-0 rsyslogd[239561]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:18:30 compute-0 rsyslogd[239561]: message too long (9052) with configured size 8096, begin of message is: 2026-02-18 15:18:29.830 189020 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:18:31 compute-0 openstack_network_exporter[208107]: ERROR   15:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:18:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:18:31 compute-0 openstack_network_exporter[208107]: ERROR   15:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:18:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:18:32 compute-0 nova_compute[189016]: 2026-02-18 15:18:32.745 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:33 compute-0 nova_compute[189016]: 2026-02-18 15:18:33.686 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:35 compute-0 podman[254164]: 2026-02-18 15:18:35.747723614 +0000 UTC m=+0.066634626 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, io.openshift.tags=minimal rhel9, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 18 15:18:35 compute-0 podman[254163]: 2026-02-18 15:18:35.74837672 +0000 UTC m=+0.069303043 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 18 15:18:37 compute-0 nova_compute[189016]: 2026-02-18 15:18:37.747 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:38 compute-0 nova_compute[189016]: 2026-02-18 15:18:38.691 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:18:39 compute-0 nova_compute[189016]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:18:39 compute-0 nova_compute[189016]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = 
e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in 
raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise 
exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db     raise result
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n
Feb 18 15:18:39 compute-0 nova_compute[189016]: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db #033[00m
Feb 18 15:18:40 compute-0 rsyslogd[239561]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:18:40 compute-0 rsyslogd[239561]: message too long (9052) with configured size 8096, begin of message is: 2026-02-18 15:18:39.799 189020 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:18:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:18:41.466 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:18:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:18:41.471 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:18:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:18:41.474 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:18:41 compute-0 podman[254206]: 2026-02-18 15:18:41.757290798 +0000 UTC m=+0.080590299 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb 18 15:18:41 compute-0 podman[254207]: 2026-02-18 15:18:41.761185456 +0000 UTC m=+0.076706861 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 18 15:18:41 compute-0 podman[254208]: 2026-02-18 15:18:41.777345025 +0000 UTC m=+0.093017833 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, config_id=kepler)
Feb 18 15:18:42 compute-0 nova_compute[189016]: 2026-02-18 15:18:42.752 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:43 compute-0 nova_compute[189016]: 2026-02-18 15:18:43.695 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:47 compute-0 nova_compute[189016]: 2026-02-18 15:18:47.755 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:48 compute-0 nova_compute[189016]: 2026-02-18 15:18:48.699 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:49 compute-0 podman[254260]: 2026-02-18 15:18:49.760508934 +0000 UTC m=+0.070974405 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 15:18:49 compute-0 podman[254259]: 2026-02-18 15:18:49.769338127 +0000 UTC m=+0.082470896 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260216, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, tcib_managed=true)
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:18:49 compute-0 nova_compute[189016]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:18:49 compute-0 nova_compute[189016]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = 
e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in 
raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise 
exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db     raise result
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n
Feb 18 15:18:49 compute-0 nova_compute[189016]: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db #033[00m
Feb 18 15:18:50 compute-0 rsyslogd[239561]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:18:50 compute-0 rsyslogd[239561]: message too long (9052) with configured size 8096, begin of message is: 2026-02-18 15:18:49.808 189020 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:18:52 compute-0 nova_compute[189016]: 2026-02-18 15:18:52.761 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:53 compute-0 nova_compute[189016]: 2026-02-18 15:18:53.702 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:53 compute-0 podman[254301]: 2026-02-18 15:18:53.820436582 +0000 UTC m=+0.117232465 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 15:18:57 compute-0 nova_compute[189016]: 2026-02-18 15:18:57.763 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:58 compute-0 nova_compute[189016]: 2026-02-18 15:18:58.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:18:58 compute-0 nova_compute[189016]: 2026-02-18 15:18:58.052 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 18 15:18:58 compute-0 nova_compute[189016]: 2026-02-18 15:18:58.706 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.046 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:18:59 compute-0 podman[204930]: time="2026-02-18T15:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:18:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30472 "" "Go-http-client/1.1"
Feb 18 15:18:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4843 "" "Go-http-client/1.1"
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:18:59 compute-0 nova_compute[189016]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:18:59 compute-0 nova_compute[189016]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = 
e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in 
raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise 
exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db     raise result
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n
Feb 18 15:18:59 compute-0 nova_compute[189016]: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db #033[00m
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.050 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 18 15:19:00 compute-0 rsyslogd[239561]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:19:00 compute-0 rsyslogd[239561]: message too long (9052) with configured size 8096, begin of message is: 2026-02-18 15:18:59.801 189020 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Error during ComputeManager._heal_instance_info_cache: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:19:00 compute-0 nova_compute[189016]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:19:00 compute-0 nova_compute[189016]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1378, in get_by_host\n    db_inst_list = cls._db_instance_get_all_by_host(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    
return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1373, in _db_instance_get_all_by_host\n    return db.instance_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2155, in instance_get_all_by_host\n    instances = query.filter_by(host=host).all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, 
with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", 
line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n'
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task Traceback (most recent call last):
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task     task(self, context)
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 9863, in _heal_instance_info_cache
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task     db_instances = objects.InstanceList.get_by_host(
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task     result = self.transport._send(
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task     raise result
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1378, in get_by_host\n    db_inst_list = cls._db_instance_get_all_by_host(\n', '  File 
"/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1373, in _db_instance_get_all_by_host\n    return db.instance_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2155, in instance_get_all_by_host\n    instances = query.filter_by(host=host).all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in 
_handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py"
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task #033[00m
Feb 18 15:19:00 compute-0 nova_compute[189016]: 2026-02-18 15:19:00.099 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:19:00 compute-0 rsyslogd[239561]: message too long (8558) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:19:00 compute-0 rsyslogd[239561]: message too long (8622) with configured size 8096, begin of message is: 2026-02-18 15:19:00.097 189020 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:19:01 compute-0 openstack_network_exporter[208107]: ERROR   15:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:19:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:19:01 compute-0 openstack_network_exporter[208107]: ERROR   15:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:19:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:19:02 compute-0 nova_compute[189016]: 2026-02-18 15:19:02.767 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:03 compute-0 nova_compute[189016]: 2026-02-18 15:19:03.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:19:03 compute-0 nova_compute[189016]: 2026-02-18 15:19:03.710 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:05 compute-0 nova_compute[189016]: 2026-02-18 15:19:05.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.045 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Error during ComputeManager._sync_scheduler_instance_info: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:19:06 compute-0 nova_compute[189016]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:19:06 compute-0 nova_compute[189016]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1378, in get_by_host\n    db_inst_list = cls._db_instance_get_all_by_host(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    
return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1373, in _db_instance_get_all_by_host\n    return db.instance_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2155, in instance_get_all_by_host\n    instances = query.filter_by(host=host).all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, 
with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", 
line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n'
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task Traceback (most recent call last):
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task     task(self, context)
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2236, in _sync_scheduler_instance_info
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task     instances = objects.InstanceList.get_by_host(context, self.host,
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task     result = self.transport._send(
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task     raise result
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1378, in get_by_host\n    db_inst_list = cls._db_instance_get_all_by_host(\n', '  File 
"/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1373, in _db_instance_get_all_by_host\n    return db.instance_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2155, in instance_get_all_by_host\n    instances = query.filter_by(host=host).all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in 
_handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py"
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task #033[00m
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.150 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Error during ComputeManager.update_available_resource: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:19:06 compute-0 nova_compute[189016]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:19:06 compute-0 nova_compute[189016]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 
179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n  
  fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to 
MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task Traceback (most recent call last):
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task     task(self, context)
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10584, in update_available_resource
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task     compute_nodes_in_db = self._get_compute_nodes_in_db(context,
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10631, in _get_compute_nodes_in_db
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task     return objects.ComputeNodeList.get_all_by_host(context, self.host,
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task     result = self.transport._send(
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task     raise result
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File 
"/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 
'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at:
Feb 18 15:19:06 compute-0 nova_compute[189016]: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task #033[00m
Feb 18 15:19:06 compute-0 rsyslogd[239561]: message too long (8558) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:19:06 compute-0 rsyslogd[239561]: message too long (8622) with configured size 8096, begin of message is: 2026-02-18 15:19:06.148 189020 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:19:06 compute-0 rsyslogd[239561]: message too long (8132) with configured size 8096, begin of message is: 2026-02-18 15:19:06.193 189020 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:19:06 compute-0 podman[254326]: 2026-02-18 15:19:06.761247999 +0000 UTC m=+0.075460059 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 18 15:19:06 compute-0 podman[254327]: 2026-02-18 15:19:06.766001539 +0000 UTC m=+0.081817650 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_id=openstack_network_exporter, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, com.redhat.component=ubi9-minimal-container, release=1770267347, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Feb 18 15:19:07 compute-0 nova_compute[189016]: 2026-02-18 15:19:07.095 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:19:07 compute-0 nova_compute[189016]: 2026-02-18 15:19:07.769 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:08 compute-0 nova_compute[189016]: 2026-02-18 15:19:08.714 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:19:09 compute-0 nova_compute[189016]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:19:09 compute-0 nova_compute[189016]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = 
e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in 
raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise 
exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db     raise result
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n
Feb 18 15:19:09 compute-0 nova_compute[189016]: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db #033[00m
Feb 18 15:19:10 compute-0 rsyslogd[239561]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:19:10 compute-0 rsyslogd[239561]: message too long (9052) with configured size 8096, begin of message is: 2026-02-18 15:19:09.792 189020 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb 18 15:19:12 compute-0 podman[254370]: 2026-02-18 15:19:12.755672559 +0000 UTC m=+0.081615595 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 18 15:19:12 compute-0 podman[254369]: 2026-02-18 15:19:12.765509498 +0000 UTC m=+0.093822714 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 15:19:12 compute-0 podman[254371]: 2026-02-18 15:19:12.768735749 +0000 UTC m=+0.085055121 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, version=9.4, io.buildah.version=1.29.0, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 
'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Feb 18 15:19:12 compute-0 nova_compute[189016]: 2026-02-18 15:19:12.773 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:13 compute-0 nova_compute[189016]: 2026-02-18 15:19:13.717 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:17 compute-0 nova_compute[189016]: 2026-02-18 15:19:17.777 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:18 compute-0 nova_compute[189016]: 2026-02-18 15:19:18.721 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:19 compute-0 nova_compute[189016]: 2026-02-18 15:19:19.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:19:19 compute-0 nova_compute[189016]: 2026-02-18 15:19:19.810 189020 INFO nova.servicegroup.drivers.db [-] Recovered from being unable to report status.#033[00m
Feb 18 15:19:20 compute-0 podman[254426]: 2026-02-18 15:19:20.76014363 +0000 UTC m=+0.070246878 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 15:19:20 compute-0 podman[254425]: 2026-02-18 15:19:20.78311188 +0000 UTC m=+0.095000693 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260216, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Feb 18 15:19:22 compute-0 nova_compute[189016]: 2026-02-18 15:19:22.780 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:22 compute-0 nova_compute[189016]: 2026-02-18 15:19:22.980 189020 DEBUG nova.network.neutron [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Successfully created port: ca0eba72-b2eb-451a-8aba-98bb05ea6e44 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 18 15:19:23 compute-0 nova_compute[189016]: 2026-02-18 15:19:23.208 189020 DEBUG nova.network.neutron [None req-954ebd7c-75fb-4f30-ba85-c2b86d0fcb3a 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Updating instance_info_cache with network_info: [{"id": "00cd0913-0a46-457d-9b30-4007ec209a54", "address": "fa:16:3e:94:08:4b", "network": {"id": "aa0dcae9-fb1d-4854-8c2f-bb40797fee0c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-836147848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e70a93fe3e61494488f1032883dfa661", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00cd0913-0a", "ovs_interfaceid": "00cd0913-0a46-457d-9b30-4007ec209a54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:19:23 compute-0 nova_compute[189016]: 2026-02-18 15:19:23.235 189020 DEBUG oslo_concurrency.lockutils [None req-954ebd7c-75fb-4f30-ba85-c2b86d0fcb3a 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] Releasing lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:19:23 compute-0 nova_compute[189016]: 2026-02-18 15:19:23.236 189020 DEBUG nova.compute.manager [None req-954ebd7c-75fb-4f30-ba85-c2b86d0fcb3a 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Feb 18 15:19:23 compute-0 nova_compute[189016]: 2026-02-18 15:19:23.236 189020 DEBUG nova.compute.manager [None req-954ebd7c-75fb-4f30-ba85-c2b86d0fcb3a 5092e33fb89a453bb8e6853648498f94 e70a93fe3e61494488f1032883dfa661 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] network_info to inject: |[{"id": "00cd0913-0a46-457d-9b30-4007ec209a54", "address": "fa:16:3e:94:08:4b", "network": {"id": "aa0dcae9-fb1d-4854-8c2f-bb40797fee0c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-836147848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e70a93fe3e61494488f1032883dfa661", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00cd0913-0a", "ovs_interfaceid": "00cd0913-0a46-457d-9b30-4007ec209a54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Feb 18 15:19:23 compute-0 nova_compute[189016]: 2026-02-18 15:19:23.242 189020 DEBUG oslo_concurrency.lockutils [req-ddb050c3-1100-4e5e-92d5-569f2d7916b6 req-de482c12-ac86-4915-bba9-38013f65020f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:19:23 compute-0 nova_compute[189016]: 2026-02-18 15:19:23.242 189020 DEBUG nova.network.neutron [req-ddb050c3-1100-4e5e-92d5-569f2d7916b6 req-de482c12-ac86-4915-bba9-38013f65020f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Refreshing network info cache for port 00cd0913-0a46-457d-9b30-4007ec209a54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 18 15:19:23 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:23.492 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '9a:bc:a6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:81:9f:51:00:b1'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:19:23 compute-0 nova_compute[189016]: 2026-02-18 15:19:23.496 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:23 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:23.497 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 18 15:19:23 compute-0 nova_compute[189016]: 2026-02-18 15:19:23.723 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:24 compute-0 nova_compute[189016]: 2026-02-18 15:19:24.004 189020 DEBUG nova.network.neutron [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Successfully updated port: ca0eba72-b2eb-451a-8aba-98bb05ea6e44 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 18 15:19:24 compute-0 nova_compute[189016]: 2026-02-18 15:19:24.037 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Acquiring lock "refresh_cache-2177a803-311a-47ef-8beb-465c67ce1bdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:19:24 compute-0 nova_compute[189016]: 2026-02-18 15:19:24.037 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Acquired lock "refresh_cache-2177a803-311a-47ef-8beb-465c67ce1bdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:19:24 compute-0 nova_compute[189016]: 2026-02-18 15:19:24.038 189020 DEBUG nova.network.neutron [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 18 15:19:24 compute-0 nova_compute[189016]: 2026-02-18 15:19:24.230 189020 DEBUG nova.compute.manager [req-39bb6795-b185-46e6-8b4b-e8aaac54b1f8 req-29143f92-41aa-453e-b5e0-55e2c712d93e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Received event network-changed-ca0eba72-b2eb-451a-8aba-98bb05ea6e44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:19:24 compute-0 nova_compute[189016]: 2026-02-18 15:19:24.231 189020 DEBUG nova.compute.manager [req-39bb6795-b185-46e6-8b4b-e8aaac54b1f8 req-29143f92-41aa-453e-b5e0-55e2c712d93e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Refreshing instance network info cache due to event network-changed-ca0eba72-b2eb-451a-8aba-98bb05ea6e44. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 18 15:19:24 compute-0 nova_compute[189016]: 2026-02-18 15:19:24.231 189020 DEBUG oslo_concurrency.lockutils [req-39bb6795-b185-46e6-8b4b-e8aaac54b1f8 req-29143f92-41aa-453e-b5e0-55e2c712d93e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "refresh_cache-2177a803-311a-47ef-8beb-465c67ce1bdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:19:24 compute-0 nova_compute[189016]: 2026-02-18 15:19:24.263 189020 DEBUG nova.network.neutron [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 18 15:19:24 compute-0 nova_compute[189016]: 2026-02-18 15:19:24.646 189020 DEBUG nova.network.neutron [req-ddb050c3-1100-4e5e-92d5-569f2d7916b6 req-de482c12-ac86-4915-bba9-38013f65020f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Updated VIF entry in instance network info cache for port 00cd0913-0a46-457d-9b30-4007ec209a54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 18 15:19:24 compute-0 nova_compute[189016]: 2026-02-18 15:19:24.647 189020 DEBUG nova.network.neutron [req-ddb050c3-1100-4e5e-92d5-569f2d7916b6 req-de482c12-ac86-4915-bba9-38013f65020f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 538e968b-7f01-4e6b-af67-182df12fedec] Updating instance_info_cache with network_info: [{"id": "00cd0913-0a46-457d-9b30-4007ec209a54", "address": "fa:16:3e:94:08:4b", "network": {"id": "aa0dcae9-fb1d-4854-8c2f-bb40797fee0c", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-836147848-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e70a93fe3e61494488f1032883dfa661", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00cd0913-0a", "ovs_interfaceid": "00cd0913-0a46-457d-9b30-4007ec209a54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:19:24 compute-0 nova_compute[189016]: 2026-02-18 15:19:24.674 189020 DEBUG oslo_concurrency.lockutils [req-ddb050c3-1100-4e5e-92d5-569f2d7916b6 req-de482c12-ac86-4915-bba9-38013f65020f af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-538e968b-7f01-4e6b-af67-182df12fedec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:19:24 compute-0 podman[254467]: 2026-02-18 15:19:24.775370209 +0000 UTC m=+0.103282803 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 18 15:19:24 compute-0 nova_compute[189016]: 2026-02-18 15:19:24.899 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.206 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.210 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.211 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f78deee07d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.214 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.214 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.214 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.215 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.216 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.216 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.216 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.216 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.216 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.216 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.217 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.217 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.217 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.217 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.217 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.217 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.217 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.217 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee2360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.217 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.217 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.218 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.218 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.218 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.219 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.219 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f78deee06e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f78dda1da90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.228 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0914ee8e-421d-4e49-958e-4e659b7fdc22', 'name': 'tempest-ServerActionsTestJSON-server-81098757', 'flavor': {'id': '1682e27b-a40b-4634-9ba2-5b28d38a8558', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '3b4a4a6a-1650-453f-ba10-3bb16d71641c'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'f278181458244cb4836c191782a17069', 'user_id': 'd7476a1b8c814ab687793dcb836094b1', 'hostId': '486d6a3a2cc9ac09d315b66cb01168e8ea7aee7519b7844389ed0aa2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.231 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '538e968b-7f01-4e6b-af67-182df12fedec', 'name': 'tempest-AttachInterfacesUnderV243Test-server-1891672357', 'flavor': {'id': '1682e27b-a40b-4634-9ba2-5b28d38a8558', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '3b4a4a6a-1650-453f-ba10-3bb16d71641c'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e70a93fe3e61494488f1032883dfa661', 'user_id': '5092e33fb89a453bb8e6853648498f94', 'hostId': 'a89954fd0778a37bb099359824c97bf8be3d0ee26a2da404ba757d4b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.232 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.232 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.232 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.233 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.241 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-02-18T15:19:25.232938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.277 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.read.latency volume: 1421108316 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.279 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.read.latency volume: 83272498 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.325 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.read.latency volume: 1081895999 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.325 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.read.latency volume: 84237973 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.327 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.327 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f78deee0740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.327 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.327 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.327 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.327 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.328 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.read.bytes volume: 32065536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.328 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.328 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.read.bytes volume: 30177792 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.329 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-02-18T15:19:25.327806) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.329 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.329 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f78deee0830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.330 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.330 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.330 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.330 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.330 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.read.requests volume: 1219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.330 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-02-18T15:19:25.330489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.331 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.331 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.read.requests volume: 1088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.331 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.332 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.332 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f78deee2ae0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.332 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.332 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.332 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee38c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.332 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-02-18T15:19:25.332685) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.349 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.allocation volume: 31072256 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.349 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.370 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.370 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.371 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.371 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f78deee0890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.371 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.371 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.372 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee08c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.372 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.372 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.usage volume: 30212096 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.372 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.372 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.373 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.373 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.373 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f78deee08f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.373 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-02-18T15:19:25.372193) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.373 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.374 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.374 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.374 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.374 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.write.bytes volume: 446464 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.374 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-02-18T15:19:25.374198) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.375 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.375 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.write.bytes volume: 73089024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.375 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.375 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.376 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f78deee2390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.376 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.376 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.376 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2150>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.376 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-02-18T15:19:25.376525) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.381 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.385 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.387 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.387 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f78e1127770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.388 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.388 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.388 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deffd160>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.388 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.389 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-02-18T15:19:25.388342) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.387 189020 DEBUG nova.network.neutron [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Updating instance_info_cache with network_info: [{"id": "ca0eba72-b2eb-451a-8aba-98bb05ea6e44", "address": "fa:16:3e:c6:fb:1b", "network": {"id": "d75dadf4-b73e-4827-9a42-29672a662a19", "bridge": "br-int", "label": "tempest-TestServerBasicOps-675853718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f850c210f143e1a4fcb26746366722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca0eba72-b2", "ovs_interfaceid": "ca0eba72-b2eb-451a-8aba-98bb05ea6e44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.411 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Releasing lock "refresh_cache-2177a803-311a-47ef-8beb-465c67ce1bdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.412 189020 DEBUG nova.compute.manager [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Instance network_info: |[{"id": "ca0eba72-b2eb-451a-8aba-98bb05ea6e44", "address": "fa:16:3e:c6:fb:1b", "network": {"id": "d75dadf4-b73e-4827-9a42-29672a662a19", "bridge": "br-int", "label": "tempest-TestServerBasicOps-675853718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f850c210f143e1a4fcb26746366722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca0eba72-b2", "ovs_interfaceid": "ca0eba72-b2eb-451a-8aba-98bb05ea6e44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.412 189020 DEBUG oslo_concurrency.lockutils [req-39bb6795-b185-46e6-8b4b-e8aaac54b1f8 req-29143f92-41aa-453e-b5e0-55e2c712d93e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquired lock "refresh_cache-2177a803-311a-47ef-8beb-465c67ce1bdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.412 189020 DEBUG nova.network.neutron [req-39bb6795-b185-46e6-8b4b-e8aaac54b1f8 req-29143f92-41aa-453e-b5e0-55e2c712d93e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Refreshing network info cache for port ca0eba72-b2eb-451a-8aba-98bb05ea6e44 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.413 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/cpu volume: 36330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.416 189020 DEBUG nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Start _get_guest_xml network_info=[{"id": "ca0eba72-b2eb-451a-8aba-98bb05ea6e44", "address": "fa:16:3e:c6:fb:1b", "network": {"id": "d75dadf4-b73e-4827-9a42-29672a662a19", "bridge": "br-int", "label": "tempest-TestServerBasicOps-675853718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f850c210f143e1a4fcb26746366722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca0eba72-b2", "ovs_interfaceid": "ca0eba72-b2eb-451a-8aba-98bb05ea6e44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-18T15:14:47Z,direct_url=<?>,disk_format='qcow2',id=3b4a4a6a-1650-453f-ba10-3bb16d71641c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-18T15:14:48Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_name': '/dev/vda', 'image_id': '3b4a4a6a-1650-453f-ba10-3bb16d71641c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.426 189020 WARNING nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.436 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/cpu volume: 38150000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.437 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.437 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f78deee0950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.437 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.437 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.438 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.438 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.438 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.write.latency volume: 145306112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.438 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.439 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.write.latency volume: 4352260537 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.439 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-02-18T15:19:25.438150) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.439 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.439 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.439 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f78deee21b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.439 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.439 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.439 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2180>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.440 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.440 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-02-18T15:19:25.440043) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.440 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.outgoing.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.440 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.440 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.441 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f78deee0d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.441 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.441 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.441 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee21e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.441 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.441 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.incoming.bytes.delta volume: 4604 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.442 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.442 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.442 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f78deee09b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.443 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.443 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.443 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee09e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.443 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.443 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-02-18T15:19:25.441674) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.443 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.write.requests volume: 55 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.443 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-02-18T15:19:25.443440) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.444 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.444 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.write.requests volume: 334 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.443 189020 DEBUG nova.virt.libvirt.host [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.444 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.444 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.445 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f78deee1010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.445 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.445 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.445 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1220>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.445 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.445 189020 DEBUG nova.virt.libvirt.host [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.445 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.445 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.446 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.446 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f78deee0a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.446 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.446 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.446 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.446 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.446 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-02-18T15:19:25.445367) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.446 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-02-18T15:19:25.446476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.447 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.447 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f78deee1280>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.447 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.447 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.447 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78e064ea50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.447 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.448 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.448 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-02-18T15:19:25.447895) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.448 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.448 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.448 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f78deee2240>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.448 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.449 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.449 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.449 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.449 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.outgoing.bytes volume: 3796 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.449 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.450 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.450 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-02-18T15:19:25.449139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.450 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f78deee0a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.450 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.450 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.450 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.450 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.451 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.451 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f78deee22d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.451 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.452 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.452 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee2300>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.452 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-02-18T15:19:25.450895) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.452 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.452 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.outgoing.bytes.delta volume: 3796 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.452 189020 DEBUG nova.virt.libvirt.host [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.452 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.453 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.453 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-02-18T15:19:25.452479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.453 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f78deee2330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.453 189020 DEBUG nova.virt.libvirt.host [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.453 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.453 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f78deee23c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.453 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.453 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.453 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.454 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.454 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.454 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-02-18T15:19:25.454069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.454 189020 DEBUG nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.454 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.454 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.454 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f78deee0cb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.455 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.455 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.454 189020 DEBUG nova.virt.hardware [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-18T15:14:46Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1682e27b-a40b-4634-9ba2-5b28d38a8558',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-18T15:14:47Z,direct_url=<?>,disk_format='qcow2',id=3b4a4a6a-1650-453f-ba10-3bb16d71641c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='71c6c5d63b07447388ace322f081ffc3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-18T15:14:48Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.455 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.455 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.455 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/memory.usage volume: 42.421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.455 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-02-18T15:19:25.455214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.455 189020 DEBUG nova.virt.hardware [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.455 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/memory.usage volume: 42.6015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.455 189020 DEBUG nova.virt.hardware [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.455 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.456 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f78deee2ab0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.456 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.456 189020 DEBUG nova.virt.hardware [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.456 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.456 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1d30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.456 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.456 189020 DEBUG nova.virt.hardware [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.456 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.456 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-02-18T15:19:25.456312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.456 189020 DEBUG nova.virt.hardware [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.456 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.456 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.457 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.457 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.457 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f78deee0d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.457 189020 DEBUG nova.virt.hardware [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.457 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.457 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.457 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee0d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.458 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.457 189020 DEBUG nova.virt.hardware [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.458 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.incoming.bytes volume: 4694 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.458 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-02-18T15:19:25.457944) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.458 189020 DEBUG nova.virt.hardware [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.458 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.incoming.bytes volume: 4343 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.458 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.458 189020 DEBUG nova.virt.hardware [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.458 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f78deee1520>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.458 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.458 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.459 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee1550>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.458 189020 DEBUG nova.virt.hardware [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.459 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.459 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.459 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-02-18T15:19:25.459071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.459 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.459 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.459 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f78deee20f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.459 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.460 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.460 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f78deee06b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.460 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.460 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-02-18T15:19:25.460136) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.460 15 DEBUG ceilometer.compute.pollsters [-] 0914ee8e-421d-4e49-958e-4e659b7fdc22/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.461 15 DEBUG ceilometer.compute.pollsters [-] 538e968b-7f01-4e6b-af67-182df12fedec/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.462 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.462 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f78deee2420>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f78dee98290>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.463 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.464 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.464 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.464 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.464 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.464 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.465 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.466 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.466 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.466 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.466 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.466 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.466 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.466 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.466 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 ceilometer_agent_compute[198738]: 2026-02-18 15:19:25.466 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.469 189020 DEBUG nova.virt.libvirt.vif [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-18T15:17:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-450119003',display_name='tempest-TestServerBasicOps-server-450119003',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-450119003',id=10,image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBETv6W8Ctf1v26QZYVjp4b+UJELWzqDIbVQQKcP/iDD3NwsRh+7zyNmFftoMcHueUqF7e1AzDcEbK3YMotUb7yIao3Tqu4q4bnJTsgaa6vUHZqroOotgOEB8sKyGjV4WRg==',key_name='tempest-TestServerBasicOps-484484429',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='72f850c210f143e1a4fcb26746366722',ramdisk_id='',reservation_id='r-q4yvyshc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-193986866',owner_user_name='tempest-TestServerBasicOps-193986866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-18T15:17:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='959521a7f37a426ba0b225e559599f65',uuid=2177a803-311a-47ef-8beb-465c67ce1bdc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca0eba72-b2eb-451a-8aba-98bb05ea6e44", "address": "fa:16:3e:c6:fb:1b", "network": {"id": "d75dadf4-b73e-4827-9a42-29672a662a19", "bridge": "br-int", "label": "tempest-TestServerBasicOps-675853718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f850c210f143e1a4fcb26746366722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca0eba72-b2", "ovs_interfaceid": "ca0eba72-b2eb-451a-8aba-98bb05ea6e44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.469 189020 DEBUG nova.network.os_vif_util [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Converting VIF {"id": "ca0eba72-b2eb-451a-8aba-98bb05ea6e44", "address": "fa:16:3e:c6:fb:1b", "network": {"id": "d75dadf4-b73e-4827-9a42-29672a662a19", "bridge": "br-int", "label": "tempest-TestServerBasicOps-675853718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f850c210f143e1a4fcb26746366722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca0eba72-b2", "ovs_interfaceid": "ca0eba72-b2eb-451a-8aba-98bb05ea6e44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.470 189020 DEBUG nova.network.os_vif_util [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:fb:1b,bridge_name='br-int',has_traffic_filtering=True,id=ca0eba72-b2eb-451a-8aba-98bb05ea6e44,network=Network(d75dadf4-b73e-4827-9a42-29672a662a19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca0eba72-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.472 189020 DEBUG nova.objects.instance [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2177a803-311a-47ef-8beb-465c67ce1bdc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.493 189020 DEBUG nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] End _get_guest_xml xml=<domain type="kvm">
Feb 18 15:19:25 compute-0 nova_compute[189016]:  <uuid>2177a803-311a-47ef-8beb-465c67ce1bdc</uuid>
Feb 18 15:19:25 compute-0 nova_compute[189016]:  <name>instance-0000000a</name>
Feb 18 15:19:25 compute-0 nova_compute[189016]:  <memory>131072</memory>
Feb 18 15:19:25 compute-0 nova_compute[189016]:  <vcpu>1</vcpu>
Feb 18 15:19:25 compute-0 nova_compute[189016]:  <metadata>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <nova:name>tempest-TestServerBasicOps-server-450119003</nova:name>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <nova:creationTime>2026-02-18 15:19:25</nova:creationTime>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <nova:flavor name="m1.nano">
Feb 18 15:19:25 compute-0 nova_compute[189016]:        <nova:memory>128</nova:memory>
Feb 18 15:19:25 compute-0 nova_compute[189016]:        <nova:disk>1</nova:disk>
Feb 18 15:19:25 compute-0 nova_compute[189016]:        <nova:swap>0</nova:swap>
Feb 18 15:19:25 compute-0 nova_compute[189016]:        <nova:ephemeral>0</nova:ephemeral>
Feb 18 15:19:25 compute-0 nova_compute[189016]:        <nova:vcpus>1</nova:vcpus>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      </nova:flavor>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <nova:owner>
Feb 18 15:19:25 compute-0 nova_compute[189016]:        <nova:user uuid="959521a7f37a426ba0b225e559599f65">tempest-TestServerBasicOps-193986866-project-member</nova:user>
Feb 18 15:19:25 compute-0 nova_compute[189016]:        <nova:project uuid="72f850c210f143e1a4fcb26746366722">tempest-TestServerBasicOps-193986866</nova:project>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      </nova:owner>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <nova:root type="image" uuid="3b4a4a6a-1650-453f-ba10-3bb16d71641c"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <nova:ports>
Feb 18 15:19:25 compute-0 nova_compute[189016]:        <nova:port uuid="ca0eba72-b2eb-451a-8aba-98bb05ea6e44">
Feb 18 15:19:25 compute-0 nova_compute[189016]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:        </nova:port>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      </nova:ports>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    </nova:instance>
Feb 18 15:19:25 compute-0 nova_compute[189016]:  </metadata>
Feb 18 15:19:25 compute-0 nova_compute[189016]:  <sysinfo type="smbios">
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <system>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <entry name="manufacturer">RDO</entry>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <entry name="product">OpenStack Compute</entry>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <entry name="serial">2177a803-311a-47ef-8beb-465c67ce1bdc</entry>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <entry name="uuid">2177a803-311a-47ef-8beb-465c67ce1bdc</entry>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <entry name="family">Virtual Machine</entry>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    </system>
Feb 18 15:19:25 compute-0 nova_compute[189016]:  </sysinfo>
Feb 18 15:19:25 compute-0 nova_compute[189016]:  <os>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <boot dev="hd"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <smbios mode="sysinfo"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:  </os>
Feb 18 15:19:25 compute-0 nova_compute[189016]:  <features>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <acpi/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <apic/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <vmcoreinfo/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:  </features>
Feb 18 15:19:25 compute-0 nova_compute[189016]:  <clock offset="utc">
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <timer name="pit" tickpolicy="delay"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <timer name="hpet" present="no"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:  </clock>
Feb 18 15:19:25 compute-0 nova_compute[189016]:  <cpu mode="host-model" match="exact">
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <topology sockets="1" cores="1" threads="1"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:  </cpu>
Feb 18 15:19:25 compute-0 nova_compute[189016]:  <devices>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <disk type="file" device="disk">
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <driver name="qemu" type="qcow2" cache="none"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <target dev="vda" bus="virtio"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <disk type="file" device="cdrom">
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <driver name="qemu" type="raw" cache="none"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <source file="/var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk.config"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <target dev="sda" bus="sata"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    </disk>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <interface type="ethernet">
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <mac address="fa:16:3e:c6:fb:1b"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <driver name="vhost" rx_queue_size="512"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <mtu size="1442"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <target dev="tapca0eba72-b2"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    </interface>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <serial type="pty">
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <log file="/var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/console.log" append="off"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    </serial>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <video>
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <model type="virtio"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    </video>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <input type="tablet" bus="usb"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <rng model="virtio">
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <backend model="random">/dev/urandom</backend>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    </rng>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="pci" model="pcie-root-port"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <controller type="usb" index="0"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    <memballoon model="virtio">
Feb 18 15:19:25 compute-0 nova_compute[189016]:      <stats period="10"/>
Feb 18 15:19:25 compute-0 nova_compute[189016]:    </memballoon>
Feb 18 15:19:25 compute-0 nova_compute[189016]:  </devices>
Feb 18 15:19:25 compute-0 nova_compute[189016]: </domain>
Feb 18 15:19:25 compute-0 nova_compute[189016]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.495 189020 DEBUG nova.compute.manager [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Preparing to wait for external event network-vif-plugged-ca0eba72-b2eb-451a-8aba-98bb05ea6e44 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.495 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Acquiring lock "2177a803-311a-47ef-8beb-465c67ce1bdc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.496 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Lock "2177a803-311a-47ef-8beb-465c67ce1bdc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.496 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Lock "2177a803-311a-47ef-8beb-465c67ce1bdc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.497 189020 DEBUG nova.virt.libvirt.vif [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-18T15:17:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-450119003',display_name='tempest-TestServerBasicOps-server-450119003',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-450119003',id=10,image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBETv6W8Ctf1v26QZYVjp4b+UJELWzqDIbVQQKcP/iDD3NwsRh+7zyNmFftoMcHueUqF7e1AzDcEbK3YMotUb7yIao3Tqu4q4bnJTsgaa6vUHZqroOotgOEB8sKyGjV4WRg==',key_name='tempest-TestServerBasicOps-484484429',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='72f850c210f143e1a4fcb26746366722',ramdisk_id='',reservation_id='r-q4yvyshc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3b4a4a6a-1650-453f-ba10-3bb16d71641c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-193986866',owner_user_name='tempest-TestServerBasicOps-193986866-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-18T15:17:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='959521a7f37a426ba0b225e559599f65',uuid=2177a803-311a-47ef-8beb-465c67ce1bdc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca0eba72-b2eb-451a-8aba-98bb05ea6e44", "address": "fa:16:3e:c6:fb:1b", "network": {"id": "d75dadf4-b73e-4827-9a42-29672a662a19", "bridge": "br-int", "label": "tempest-TestServerBasicOps-675853718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f850c210f143e1a4fcb26746366722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca0eba72-b2", "ovs_interfaceid": "ca0eba72-b2eb-451a-8aba-98bb05ea6e44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.497 189020 DEBUG nova.network.os_vif_util [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Converting VIF {"id": "ca0eba72-b2eb-451a-8aba-98bb05ea6e44", "address": "fa:16:3e:c6:fb:1b", "network": {"id": "d75dadf4-b73e-4827-9a42-29672a662a19", "bridge": "br-int", "label": "tempest-TestServerBasicOps-675853718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f850c210f143e1a4fcb26746366722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca0eba72-b2", "ovs_interfaceid": "ca0eba72-b2eb-451a-8aba-98bb05ea6e44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.498 189020 DEBUG nova.network.os_vif_util [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:fb:1b,bridge_name='br-int',has_traffic_filtering=True,id=ca0eba72-b2eb-451a-8aba-98bb05ea6e44,network=Network(d75dadf4-b73e-4827-9a42-29672a662a19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca0eba72-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.499 189020 DEBUG os_vif [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:fb:1b,bridge_name='br-int',has_traffic_filtering=True,id=ca0eba72-b2eb-451a-8aba-98bb05ea6e44,network=Network(d75dadf4-b73e-4827-9a42-29672a662a19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca0eba72-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.500 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.501 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.501 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.516 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.517 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca0eba72-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.518 189020 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapca0eba72-b2, col_values=(('external_ids', {'iface-id': 'ca0eba72-b2eb-451a-8aba-98bb05ea6e44', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:fb:1b', 'vm-uuid': '2177a803-311a-47ef-8beb-465c67ce1bdc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.521 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:25 compute-0 NetworkManager[57258]: <info>  [1771427965.5243] manager: (tapca0eba72-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.524 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.533 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.535 189020 INFO os_vif [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:fb:1b,bridge_name='br-int',has_traffic_filtering=True,id=ca0eba72-b2eb-451a-8aba-98bb05ea6e44,network=Network(d75dadf4-b73e-4827-9a42-29672a662a19),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca0eba72-b2')#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.608 189020 DEBUG nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.609 189020 DEBUG nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.609 189020 DEBUG nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] No VIF found with MAC fa:16:3e:c6:fb:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 18 15:19:25 compute-0 nova_compute[189016]: 2026-02-18 15:19:25.610 189020 INFO nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Using config drive#033[00m
Feb 18 15:19:26 compute-0 nova_compute[189016]: 2026-02-18 15:19:26.399 189020 INFO nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Creating config drive at /var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk.config#033[00m
Feb 18 15:19:26 compute-0 nova_compute[189016]: 2026-02-18 15:19:26.406 189020 DEBUG oslo_concurrency.processutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp6lg6w807 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:19:26 compute-0 nova_compute[189016]: 2026-02-18 15:19:26.541 189020 DEBUG oslo_concurrency.processutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp6lg6w807" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:19:26 compute-0 kernel: tapca0eba72-b2: entered promiscuous mode
Feb 18 15:19:26 compute-0 NetworkManager[57258]: <info>  [1771427966.6160] manager: (tapca0eba72-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Feb 18 15:19:26 compute-0 ovn_controller[99062]: 2026-02-18T15:19:26Z|00113|binding|INFO|Claiming lport ca0eba72-b2eb-451a-8aba-98bb05ea6e44 for this chassis.
Feb 18 15:19:26 compute-0 ovn_controller[99062]: 2026-02-18T15:19:26Z|00114|binding|INFO|ca0eba72-b2eb-451a-8aba-98bb05ea6e44: Claiming fa:16:3e:c6:fb:1b 10.100.0.6
Feb 18 15:19:26 compute-0 nova_compute[189016]: 2026-02-18 15:19:26.622 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:26 compute-0 ovn_controller[99062]: 2026-02-18T15:19:26Z|00115|binding|INFO|Setting lport ca0eba72-b2eb-451a-8aba-98bb05ea6e44 up in Southbound
Feb 18 15:19:26 compute-0 ovn_controller[99062]: 2026-02-18T15:19:26Z|00116|binding|INFO|Setting lport ca0eba72-b2eb-451a-8aba-98bb05ea6e44 ovn-installed in OVS
Feb 18 15:19:26 compute-0 nova_compute[189016]: 2026-02-18 15:19:26.628 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.625 108400 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:fb:1b 10.100.0.6'], port_security=['fa:16:3e:c6:fb:1b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2177a803-311a-47ef-8beb-465c67ce1bdc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d75dadf4-b73e-4827-9a42-29672a662a19', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '72f850c210f143e1a4fcb26746366722', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc6e7381-7e71-4141-8430-597f33bcdcff f55a9781-cf2e-40a3-9289-f5916f9b0c92', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=adbe499e-4869-4936-b6ea-7970ab2adf8c, chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f039ac2ed90>], logical_port=ca0eba72-b2eb-451a-8aba-98bb05ea6e44) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.630 108400 INFO neutron.agent.ovn.metadata.agent [-] Port ca0eba72-b2eb-451a-8aba-98bb05ea6e44 in datapath d75dadf4-b73e-4827-9a42-29672a662a19 bound to our chassis#033[00m
Feb 18 15:19:26 compute-0 nova_compute[189016]: 2026-02-18 15:19:26.631 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.640 108400 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d75dadf4-b73e-4827-9a42-29672a662a19#033[00m
Feb 18 15:19:26 compute-0 systemd-machined[158361]: New machine qemu-11-instance-0000000a.
Feb 18 15:19:26 compute-0 systemd-udevd[254514]: Network interface NamePolicy= disabled on kernel command line.
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.666 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0ee6f6-d182-4510-ade0-62acbe549128]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.669 108400 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd75dadf4-b1 in ovnmeta-d75dadf4-b73e-4827-9a42-29672a662a19 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.675 242262 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd75dadf4-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.676 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[933a577e-8151-4858-99fd-74fd2b3550f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:19:26 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000a.
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.678 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[f066b18e-bb46-4bd0-8dd8-ee3cd7c6c940]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:19:26 compute-0 NetworkManager[57258]: <info>  [1771427966.6808] device (tapca0eba72-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 18 15:19:26 compute-0 NetworkManager[57258]: <info>  [1771427966.6846] device (tapca0eba72-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.701 108948 DEBUG oslo.privsep.daemon [-] privsep: reply[ef959676-e0a6-4df8-a80f-2d977f1891a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.729 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[360cb2f3-d463-4ddd-9405-dec7aba53a94]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.777 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[6ead9c11-4888-4763-9dc9-8bc390023d89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.789 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[a89d7b0c-8743-4796-9e25-8b3ba5e59da0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:19:26 compute-0 NetworkManager[57258]: <info>  [1771427966.7909] manager: (tapd75dadf4-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.827 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[11293ce8-afde-4586-86bd-533b649c8b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.831 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[69e5c400-8fa2-4f4c-9b0e-40856bdcda24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:19:26 compute-0 NetworkManager[57258]: <info>  [1771427966.8621] device (tapd75dadf4-b0): carrier: link connected
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.869 242320 DEBUG oslo.privsep.daemon [-] privsep: reply[bed24f53-d78c-42f2-99e8-89010ba3b74a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.891 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[dd81e3a8-3b1d-4925-b618-9ef76335e191]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd75dadf4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:cf:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506431, 'reachable_time': 20410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254546, 'error': None, 'target': 'ovnmeta-d75dadf4-b73e-4827-9a42-29672a662a19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.912 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1c8379-f11f-4c28-a7c6-d995e526003e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:cfe0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506431, 'tstamp': 506431}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254547, 'error': None, 'target': 'ovnmeta-d75dadf4-b73e-4827-9a42-29672a662a19', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.934 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[5f5911cb-de62-42fe-8110-d7ef08548471]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd75dadf4-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:cf:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506431, 'reachable_time': 20410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254548, 'error': None, 'target': 'ovnmeta-d75dadf4-b73e-4827-9a42-29672a662a19', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:19:26 compute-0 nova_compute[189016]: 2026-02-18 15:19:26.963 189020 DEBUG nova.network.neutron [req-39bb6795-b185-46e6-8b4b-e8aaac54b1f8 req-29143f92-41aa-453e-b5e0-55e2c712d93e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Updated VIF entry in instance network info cache for port ca0eba72-b2eb-451a-8aba-98bb05ea6e44. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 18 15:19:26 compute-0 nova_compute[189016]: 2026-02-18 15:19:26.964 189020 DEBUG nova.network.neutron [req-39bb6795-b185-46e6-8b4b-e8aaac54b1f8 req-29143f92-41aa-453e-b5e0-55e2c712d93e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Updating instance_info_cache with network_info: [{"id": "ca0eba72-b2eb-451a-8aba-98bb05ea6e44", "address": "fa:16:3e:c6:fb:1b", "network": {"id": "d75dadf4-b73e-4827-9a42-29672a662a19", "bridge": "br-int", "label": "tempest-TestServerBasicOps-675853718-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72f850c210f143e1a4fcb26746366722", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca0eba72-b2", "ovs_interfaceid": "ca0eba72-b2eb-451a-8aba-98bb05ea6e44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:19:26 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:26.982 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[3a7c6522-e03c-498b-9df1-d77b141fe80c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:19:26 compute-0 nova_compute[189016]: 2026-02-18 15:19:26.985 189020 DEBUG oslo_concurrency.lockutils [req-39bb6795-b185-46e6-8b4b-e8aaac54b1f8 req-29143f92-41aa-453e-b5e0-55e2c712d93e af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Releasing lock "refresh_cache-2177a803-311a-47ef-8beb-465c67ce1bdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:27.091 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[87cfad9d-8c8b-48c7-b922-d682c2ffdffb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:27.095 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd75dadf4-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:27.095 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:27.096 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd75dadf4-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:19:27 compute-0 nova_compute[189016]: 2026-02-18 15:19:27.099 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:27 compute-0 kernel: tapd75dadf4-b0: entered promiscuous mode
Feb 18 15:19:27 compute-0 NetworkManager[57258]: <info>  [1771427967.1003] manager: (tapd75dadf4-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:27.105 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd75dadf4-b0, col_values=(('external_ids', {'iface-id': 'e69a88d3-f113-4354-a6a1-bcde87f290a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 18 15:19:27 compute-0 nova_compute[189016]: 2026-02-18 15:19:27.107 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:27 compute-0 ovn_controller[99062]: 2026-02-18T15:19:27Z|00117|binding|INFO|Releasing lport e69a88d3-f113-4354-a6a1-bcde87f290a7 from this chassis (sb_readonly=0)
Feb 18 15:19:27 compute-0 nova_compute[189016]: 2026-02-18 15:19:27.108 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:27.111 108400 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d75dadf4-b73e-4827-9a42-29672a662a19.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d75dadf4-b73e-4827-9a42-29672a662a19.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 18 15:19:27 compute-0 nova_compute[189016]: 2026-02-18 15:19:27.114 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:27.113 242262 DEBUG oslo.privsep.daemon [-] privsep: reply[49913272-eeba-4e4f-bfd6-ea03e4e7ce4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:27.116 108400 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]: global
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    log         /dev/log local0 debug
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    log-tag     haproxy-metadata-proxy-d75dadf4-b73e-4827-9a42-29672a662a19
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    user        root
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    group       root
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    maxconn     1024
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    pidfile     /var/lib/neutron/external/pids/d75dadf4-b73e-4827-9a42-29672a662a19.pid.haproxy
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    daemon
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]: defaults
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    log global
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    mode http
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    option httplog
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    option dontlognull
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    option http-server-close
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    option forwardfor
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    retries                 3
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    timeout http-request    30s
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    timeout connect         30s
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    timeout client          32s
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    timeout server          32s
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    timeout http-keep-alive 30s
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]: 
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]: listen listener
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    bind 169.254.169.254:80
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    server metadata /var/lib/neutron/metadata_proxy
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]:    http-request add-header X-OVN-Network-ID d75dadf4-b73e-4827-9a42-29672a662a19
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 18 15:19:27 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:27.117 108400 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d75dadf4-b73e-4827-9a42-29672a662a19', 'env', 'PROCESS_TAG=haproxy-d75dadf4-b73e-4827-9a42-29672a662a19', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d75dadf4-b73e-4827-9a42-29672a662a19.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 18 15:19:27 compute-0 nova_compute[189016]: 2026-02-18 15:19:27.127 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427967.1262324, 2177a803-311a-47ef-8beb-465c67ce1bdc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:19:27 compute-0 nova_compute[189016]: 2026-02-18 15:19:27.128 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] VM Started (Lifecycle Event)#033[00m
Feb 18 15:19:27 compute-0 nova_compute[189016]: 2026-02-18 15:19:27.154 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:19:27 compute-0 nova_compute[189016]: 2026-02-18 15:19:27.162 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427967.1264431, 2177a803-311a-47ef-8beb-465c67ce1bdc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 18 15:19:27 compute-0 nova_compute[189016]: 2026-02-18 15:19:27.162 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] VM Paused (Lifecycle Event)#033[00m
Feb 18 15:19:27 compute-0 nova_compute[189016]: 2026-02-18 15:19:27.184 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 18 15:19:27 compute-0 nova_compute[189016]: 2026-02-18 15:19:27.193 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 18 15:19:27 compute-0 nova_compute[189016]: 2026-02-18 15:19:27.217 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 18 15:19:27 compute-0 podman[254586]: 2026-02-18 15:19:27.641491753 +0000 UTC m=+0.083880242 container create 544f7c601c4449f8e824d466df31824d9fa8f7dd05c6fd93ce7bf6c6f7c86908 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d75dadf4-b73e-4827-9a42-29672a662a19, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 18 15:19:27 compute-0 podman[254586]: 2026-02-18 15:19:27.600295452 +0000 UTC m=+0.042683981 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 18 15:19:27 compute-0 systemd[1]: Started libpod-conmon-544f7c601c4449f8e824d466df31824d9fa8f7dd05c6fd93ce7bf6c6f7c86908.scope.
Feb 18 15:19:27 compute-0 systemd[1]: Started libcrun container.
Feb 18 15:19:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacf2023f3517d1968a937d5487b61ba8a494dc9fd7e03c6a8478ea5a4eed56a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 18 15:19:27 compute-0 podman[254586]: 2026-02-18 15:19:27.766161435 +0000 UTC m=+0.208549944 container init 544f7c601c4449f8e824d466df31824d9fa8f7dd05c6fd93ce7bf6c6f7c86908 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d75dadf4-b73e-4827-9a42-29672a662a19, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 18 15:19:27 compute-0 podman[254586]: 2026-02-18 15:19:27.774761053 +0000 UTC m=+0.217149562 container start 544f7c601c4449f8e824d466df31824d9fa8f7dd05c6fd93ce7bf6c6f7c86908 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d75dadf4-b73e-4827-9a42-29672a662a19, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 18 15:19:27 compute-0 nova_compute[189016]: 2026-02-18 15:19:27.783 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:27 compute-0 neutron-haproxy-ovnmeta-d75dadf4-b73e-4827-9a42-29672a662a19[254601]: [NOTICE]   (254605) : New worker (254607) forked
Feb 18 15:19:27 compute-0 neutron-haproxy-ovnmeta-d75dadf4-b73e-4827-9a42-29672a662a19[254601]: [NOTICE]   (254605) : Loading success.
Feb 18 15:19:29 compute-0 podman[204930]: time="2026-02-18T15:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:19:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 18 15:19:29 compute-0 podman[204930]: @ - - [18/Feb/2026:15:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5304 "" "Go-http-client/1.1"
Feb 18 15:19:30 compute-0 nova_compute[189016]: 2026-02-18 15:19:30.521 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:19:31 compute-0 openstack_network_exporter[208107]: ERROR   15:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:19:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:19:31 compute-0 openstack_network_exporter[208107]: ERROR   15:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:19:31 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.644 189020 DEBUG nova.compute.manager [req-8635c390-62cc-488e-a2a2-0fead086ca27 req-c58a334b-523f-4ccc-aba6-8abd6fc2a220 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Received event network-vif-plugged-ca0eba72-b2eb-451a-8aba-98bb05ea6e44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.645 189020 DEBUG oslo_concurrency.lockutils [req-8635c390-62cc-488e-a2a2-0fead086ca27 req-c58a334b-523f-4ccc-aba6-8abd6fc2a220 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "2177a803-311a-47ef-8beb-465c67ce1bdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.645 189020 DEBUG oslo_concurrency.lockutils [req-8635c390-62cc-488e-a2a2-0fead086ca27 req-c58a334b-523f-4ccc-aba6-8abd6fc2a220 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "2177a803-311a-47ef-8beb-465c67ce1bdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.646 189020 DEBUG oslo_concurrency.lockutils [req-8635c390-62cc-488e-a2a2-0fead086ca27 req-c58a334b-523f-4ccc-aba6-8abd6fc2a220 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "2177a803-311a-47ef-8beb-465c67ce1bdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.646 189020 DEBUG nova.compute.manager [req-8635c390-62cc-488e-a2a2-0fead086ca27 req-c58a334b-523f-4ccc-aba6-8abd6fc2a220 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Processing event network-vif-plugged-ca0eba72-b2eb-451a-8aba-98bb05ea6e44 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.647 189020 DEBUG nova.compute.manager [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.653 189020 DEBUG nova.virt.driver [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] Emitting event <LifecycleEvent: 1771427971.652956, 2177a803-311a-47ef-8beb-465c67ce1bdc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.654 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] VM Resumed (Lifecycle Event)
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.657 189020 DEBUG nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.664 189020 INFO nova.virt.libvirt.driver [-] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Instance spawned successfully.
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.665 189020 DEBUG nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.675 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.696 189020 DEBUG nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.700 189020 DEBUG nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.700 189020 DEBUG nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.701 189020 DEBUG nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.701 189020 DEBUG nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.702 189020 DEBUG nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.702 189020 DEBUG nova.virt.libvirt.driver [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.729 189020 INFO nova.compute.manager [None req-24ee7903-92dc-4e56-96de-929f64fab748 - - - - - -] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.774 189020 INFO nova.compute.manager [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Took 101.70 seconds to spawn the instance on the hypervisor.
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.775 189020 DEBUG nova.compute.manager [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.883 189020 INFO nova.compute.manager [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Took 103.19 seconds to build instance.
Feb 18 15:19:31 compute-0 nova_compute[189016]: 2026-02-18 15:19:31.925 189020 DEBUG oslo_concurrency.lockutils [None req-096db5ce-73aa-4915-bc69-54dd27e9ed51 959521a7f37a426ba0b225e559599f65 72f850c210f143e1a4fcb26746366722 - - default default] Lock "2177a803-311a-47ef-8beb-465c67ce1bdc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 103.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 15:19:32 compute-0 nova_compute[189016]: 2026-02-18 15:19:32.786 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:19:33 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:33.501 108400 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bff0df27-aa33-4d98-b417-cc9248f7a486, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 18 15:19:33 compute-0 nova_compute[189016]: 2026-02-18 15:19:33.848 189020 DEBUG nova.compute.manager [req-cc7084ff-3c04-47f9-bd10-527b07a4004c req-88ea1168-1116-4416-a588-fb931f4a8052 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Received event network-vif-plugged-ca0eba72-b2eb-451a-8aba-98bb05ea6e44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 18 15:19:33 compute-0 nova_compute[189016]: 2026-02-18 15:19:33.850 189020 DEBUG oslo_concurrency.lockutils [req-cc7084ff-3c04-47f9-bd10-527b07a4004c req-88ea1168-1116-4416-a588-fb931f4a8052 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Acquiring lock "2177a803-311a-47ef-8beb-465c67ce1bdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 15:19:33 compute-0 nova_compute[189016]: 2026-02-18 15:19:33.851 189020 DEBUG oslo_concurrency.lockutils [req-cc7084ff-3c04-47f9-bd10-527b07a4004c req-88ea1168-1116-4416-a588-fb931f4a8052 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "2177a803-311a-47ef-8beb-465c67ce1bdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 15:19:33 compute-0 nova_compute[189016]: 2026-02-18 15:19:33.851 189020 DEBUG oslo_concurrency.lockutils [req-cc7084ff-3c04-47f9-bd10-527b07a4004c req-88ea1168-1116-4416-a588-fb931f4a8052 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] Lock "2177a803-311a-47ef-8beb-465c67ce1bdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 15:19:33 compute-0 nova_compute[189016]: 2026-02-18 15:19:33.851 189020 DEBUG nova.compute.manager [req-cc7084ff-3c04-47f9-bd10-527b07a4004c req-88ea1168-1116-4416-a588-fb931f4a8052 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] No waiting events found dispatching network-vif-plugged-ca0eba72-b2eb-451a-8aba-98bb05ea6e44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 18 15:19:33 compute-0 nova_compute[189016]: 2026-02-18 15:19:33.852 189020 WARNING nova.compute.manager [req-cc7084ff-3c04-47f9-bd10-527b07a4004c req-88ea1168-1116-4416-a588-fb931f4a8052 af1962588ccc4e8a99cb64521a1571e1 a4846a7773f34370adcc9d55e462cc63 - - default default] [instance: 2177a803-311a-47ef-8beb-465c67ce1bdc] Received unexpected event network-vif-plugged-ca0eba72-b2eb-451a-8aba-98bb05ea6e44 for instance with vm_state active and task_state None.
Feb 18 15:19:35 compute-0 nova_compute[189016]: 2026-02-18 15:19:35.526 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:19:37 compute-0 nova_compute[189016]: 2026-02-18 15:19:37.790 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:19:37 compute-0 podman[254617]: 2026-02-18 15:19:37.817050355 +0000 UTC m=+0.118551269 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 18 15:19:37 compute-0 podman[254618]: 2026-02-18 15:19:37.818146312 +0000 UTC m=+0.116219999 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, name=ubi9/ubi-minimal, release=1770267347, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter)
Feb 18 15:19:40 compute-0 nova_compute[189016]: 2026-02-18 15:19:40.530 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:19:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:41.467 108400 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 18 15:19:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:41.469 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 18 15:19:41 compute-0 ovn_metadata_agent[108395]: 2026-02-18 15:19:41.471 108400 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 18 15:19:42 compute-0 nova_compute[189016]: 2026-02-18 15:19:42.794 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:19:43 compute-0 podman[254660]: 2026-02-18 15:19:43.767660385 +0000 UTC m=+0.078861535 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Feb 18 15:19:43 compute-0 podman[254659]: 2026-02-18 15:19:43.775860052 +0000 UTC m=+0.088256142 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 18 15:19:43 compute-0 podman[254658]: 2026-02-18 15:19:43.791558049 +0000 UTC m=+0.103939059 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 18 15:19:45 compute-0 nova_compute[189016]: 2026-02-18 15:19:45.535 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:19:47 compute-0 nova_compute[189016]: 2026-02-18 15:19:47.796 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:19:50 compute-0 nova_compute[189016]: 2026-02-18 15:19:50.539 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:19:51 compute-0 podman[254724]: 2026-02-18 15:19:51.754692843 +0000 UTC m=+0.068519353 container health_status a3aea2caab65e558a2f4953ff645a896b1078066115018acf9371cb459f9564e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 18 15:19:51 compute-0 podman[254723]: 2026-02-18 15:19:51.801371743 +0000 UTC m=+0.114228999 container health_status 126c16e7ae0e7cbfd1b23fc8b11002573547e6c1359bf4666c467c40c6634630 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260216, tcib_build_tag=b4e52d4cfa1d999fd2ca460155017644, org.label-schema.schema-version=1.0)
Feb 18 15:19:52 compute-0 nova_compute[189016]: 2026-02-18 15:19:52.799 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:19:55 compute-0 nova_compute[189016]: 2026-02-18 15:19:55.544 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:19:55 compute-0 podman[254769]: 2026-02-18 15:19:55.808845165 +0000 UTC m=+0.130140961 container health_status b23e3a2629c062b6583d1f8ab3dece257a12aa98053f423536a4e2e69fc7e164 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 18 15:19:57 compute-0 ovn_controller[99062]: 2026-02-18T15:19:57Z|00118|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Feb 18 15:19:57 compute-0 nova_compute[189016]: 2026-02-18 15:19:57.803 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 18 15:19:59 compute-0 nova_compute[189016]: 2026-02-18 15:19:59.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 18 15:19:59 compute-0 nova_compute[189016]: 2026-02-18 15:19:59.053 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 18 15:19:59 compute-0 podman[204930]: time="2026-02-18T15:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 18 15:19:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31705 "" "Go-http-client/1.1"
Feb 18 15:19:59 compute-0 podman[204930]: @ - - [18/Feb/2026:15:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5306 "" "Go-http-client/1.1"
Feb 18 15:20:00 compute-0 nova_compute[189016]: 2026-02-18 15:20:00.046 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 18 15:20:00 compute-0 nova_compute[189016]: 2026-02-18 15:20:00.050 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 18 15:20:00 compute-0 nova_compute[189016]: 2026-02-18 15:20:00.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 18 15:20:00 compute-0 nova_compute[189016]: 2026-02-18 15:20:00.051 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 18 15:20:00 compute-0 systemd-logind[831]: New session 31 of user zuul.
Feb 18 15:20:00 compute-0 systemd[1]: Started Session 31 of User zuul.
Feb 18 15:20:00 compute-0 nova_compute[189016]: 2026-02-18 15:20:00.443 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 18 15:20:00 compute-0 nova_compute[189016]: 2026-02-18 15:20:00.444 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquired lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 18 15:20:00 compute-0 nova_compute[189016]: 2026-02-18 15:20:00.444 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 18 15:20:00 compute-0 nova_compute[189016]: 2026-02-18 15:20:00.444 189020 DEBUG nova.objects.instance [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0914ee8e-421d-4e49-958e-4e659b7fdc22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 18 15:20:00 compute-0 nova_compute[189016]: 2026-02-18 15:20:00.547 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:20:01 compute-0 openstack_network_exporter[208107]: ERROR   15:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 18 15:20:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:20:01 compute-0 openstack_network_exporter[208107]: ERROR   15:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 18 15:20:01 compute-0 openstack_network_exporter[208107]: 
Feb 18 15:20:02 compute-0 nova_compute[189016]: 2026-02-18 15:20:02.012 189020 DEBUG nova.network.neutron [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Updating instance_info_cache with network_info: [{"id": "7514447c-f6a7-4670-817a-906ea6344789", "address": "fa:16:3e:34:24:0f", "network": {"id": "7799c1f7-b42b-4f46-a4f0-f189be986a35", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-561283655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f278181458244cb4836c191782a17069", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7514447c-f6", "ovs_interfaceid": "7514447c-f6a7-4670-817a-906ea6344789", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 18 15:20:02 compute-0 nova_compute[189016]: 2026-02-18 15:20:02.044 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Releasing lock "refresh_cache-0914ee8e-421d-4e49-958e-4e659b7fdc22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 18 15:20:02 compute-0 nova_compute[189016]: 2026-02-18 15:20:02.045 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] [instance: 0914ee8e-421d-4e49-958e-4e659b7fdc22] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 18 15:20:02 compute-0 nova_compute[189016]: 2026-02-18 15:20:02.046 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:20:02 compute-0 nova_compute[189016]: 2026-02-18 15:20:02.805 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:20:03 compute-0 nova_compute[189016]: 2026-02-18 15:20:03.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:20:05 compute-0 nova_compute[189016]: 2026-02-18 15:20:05.552 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:20:05 compute-0 ovs-vsctl[254990]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb 18 15:20:06 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 254824 (sos)
Feb 18 15:20:06 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Feb 18 15:20:06 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Feb 18 15:20:07 compute-0 ovn_controller[99062]: 2026-02-18T15:20:07Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:fb:1b 10.100.0.6
Feb 18 15:20:07 compute-0 ovn_controller[99062]: 2026-02-18T15:20:07Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:fb:1b 10.100.0.6
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.051 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.052 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:20:07 compute-0 virtqemud[188343]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.244 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.245 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.245 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.245 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 18 15:20:07 compute-0 virtqemud[188343]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb 18 15:20:07 compute-0 virtqemud[188343]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.360 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.427 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.429 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.499 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0914ee8e-421d-4e49-958e-4e659b7fdc22/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.514 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.599 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.600 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.679 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2177a803-311a-47ef-8beb-465c67ce1bdc/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.687 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.758 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.760 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.808 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:20:07 compute-0 nova_compute[189016]: 2026-02-18 15:20:07.823 189020 DEBUG oslo_concurrency.processutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/538e968b-7f01-4e6b-af67-182df12fedec/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 18 15:20:08 compute-0 podman[255264]: 2026-02-18 15:20:08.142869379 +0000 UTC m=+0.145964972 container health_status 4c906f47378c7f93f033211bc20b8ef0076006c3fb93d8446f7df25c8d08878c (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 18 15:20:08 compute-0 podman[255266]: 2026-02-18 15:20:08.155371455 +0000 UTC m=+0.154524028 container health_status 4dbdce9c0ad1ac30be7e1782a2ac0919f554cd276a00f8ce6ae57eb6e4f8f165 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'd2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-269487b8ce61d83ea4ecebd3a1867ac9ca611efee1213aefce612c11bb986535'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, io.openshift.expose-services=, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 18 15:20:08 compute-0 nova_compute[189016]: 2026-02-18 15:20:08.433 189020 WARNING nova.virt.libvirt.driver [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 18 15:20:08 compute-0 nova_compute[189016]: 2026-02-18 15:20:08.437 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4707MB free_disk=72.11910629272461GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 18 15:20:08 compute-0 nova_compute[189016]: 2026-02-18 15:20:08.437 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 18 15:20:08 compute-0 nova_compute[189016]: 2026-02-18 15:20:08.438 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 18 15:20:08 compute-0 nova_compute[189016]: 2026-02-18 15:20:08.652 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 0914ee8e-421d-4e49-958e-4e659b7fdc22 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:20:08 compute-0 nova_compute[189016]: 2026-02-18 15:20:08.653 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 538e968b-7f01-4e6b-af67-182df12fedec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:20:08 compute-0 nova_compute[189016]: 2026-02-18 15:20:08.653 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Instance 2177a803-311a-47ef-8beb-465c67ce1bdc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 18 15:20:08 compute-0 nova_compute[189016]: 2026-02-18 15:20:08.653 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 18 15:20:08 compute-0 nova_compute[189016]: 2026-02-18 15:20:08.654 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 18 15:20:08 compute-0 nova_compute[189016]: 2026-02-18 15:20:08.943 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing inventories for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 18 15:20:09 compute-0 nova_compute[189016]: 2026-02-18 15:20:09.165 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating ProviderTree inventory for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 18 15:20:09 compute-0 nova_compute[189016]: 2026-02-18 15:20:09.166 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Updating inventory in ProviderTree for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 18 15:20:09 compute-0 nova_compute[189016]: 2026-02-18 15:20:09.253 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing aggregate associations for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 18 15:20:09 compute-0 nova_compute[189016]: 2026-02-18 15:20:09.288 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Refreshing trait associations for resource provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380, traits: HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 18 15:20:09 compute-0 nova_compute[189016]: 2026-02-18 15:20:09.412 189020 DEBUG nova.compute.provider_tree [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed in ProviderTree for provider: 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 18 15:20:09 compute-0 nova_compute[189016]: 2026-02-18 15:20:09.506 189020 DEBUG nova.scheduler.client.report [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Inventory has not changed for provider 7d5f91f3-cf81-4de6-86b4-ce92bbe09380 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 18 15:20:09 compute-0 nova_compute[189016]: 2026-02-18 15:20:09.570 189020 DEBUG nova.compute.resource_tracker [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 18 15:20:09 compute-0 nova_compute[189016]: 2026-02-18 15:20:09.571 189020 DEBUG oslo_concurrency.lockutils [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 18 15:20:09 compute-0 nova_compute[189016]: 2026-02-18 15:20:09.571 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:20:09 compute-0 nova_compute[189016]: 2026-02-18 15:20:09.572 189020 DEBUG nova.compute.manager [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 18 15:20:10 compute-0 nova_compute[189016]: 2026-02-18 15:20:10.556 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:20:10 compute-0 nova_compute[189016]: 2026-02-18 15:20:10.610 189020 DEBUG oslo_service.periodic_task [None req-b45bb3ca-5d91-4d7f-ad56-0286c899bd21 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 18 15:20:11 compute-0 kernel: /proc/cgroups lists only v1 controllers, use cgroup.controllers of root cgroup for v2 info
Feb 18 15:20:11 compute-0 systemd[1]: Starting Hostname Service...
Feb 18 15:20:12 compute-0 systemd[1]: Started Hostname Service.
Feb 18 15:20:12 compute-0 nova_compute[189016]: 2026-02-18 15:20:12.810 189020 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 18 15:20:14 compute-0 podman[255866]: 2026-02-18 15:20:14.660541997 +0000 UTC m=+0.115980823 container health_status 99249c8a4bf78113c85889ad1df7dc59407dc90fbe42ad041eb401b8939a3a05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 18 15:20:14 compute-0 podman[255869]: 2026-02-18 15:20:14.667192556 +0000 UTC m=+0.121820031 container health_status 9e69fc5475fd3e9961f117ff2b1b13b26ff2c09cda5d5f2f3539073c6608ae22 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '414f146c4f668e565715023853feabf892c3ed008a9390a77bf881f573269d5e-d2cc9115bd19bf5c05e2151d3a5d61f2bc919ac120f216fe7b50ab7fbd718c1d-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49-dc1dab742c0e2889f07eb67f2ea1dfe816655194c548049e789aeebd4b3f5a49'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb 18 15:20:14 compute-0 podman[255870]: 2026-02-18 15:20:14.69783718 +0000 UTC m=+0.149784588 container health_status a0727a3ec4ac197fce396e6e635f4bd45f1fe2246db9076fa086890304ef114d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, managed_by=edpm_ansible, container_name=kepler, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9.)
